[Yahoo-eng-team] [Bug 1710255] [NEW] Network actions should be enabled based on neutron policy

2017-08-11 Thread Ying Zuo
Public bug reported:

Currently the delete network, edit network, and create subnet actions
for a shared network are restricted to admin users only. Instead of
hardcoding this requirement, these actions should be enabled based on
the neutron policy.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710255

Title:
  Network actions should be enabled based on neutron policy

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the delete network, edit network, and create subnet actions
  for a shared network are restricted to admin users only. Instead of
  hardcoding this requirement, these actions should be enabled based on
  the neutron policy.
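
  For context, Horizon table actions can already delegate this decision
  to the policy engine via ``policy_rules`` rather than an admin-only
  check; a minimal sketch, with illustrative names (not the actual
  Horizon networks table code):

    from django.utils.translation import ungettext_lazy

    from horizon import tables


    class DeleteNetwork(tables.DeleteAction):
        # Let the neutron policy file decide who may delete the network,
        # instead of hardcoding an admin-only check for shared networks.
        policy_rules = (("network", "delete_network"),)

        @staticmethod
        def action_present(count):
            return ungettext_lazy(u"Delete Network",
                                  u"Delete Networks", count)

        @staticmethod
        def action_past(count):
            return ungettext_lazy(u"Deleted Network",
                                  u"Deleted Networks", count)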

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687187] Re: metadata-api requires iptables-save/restore

2017-08-11 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
 Assignee: (unassigned) => Sam Yaple (s8m)

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/newton
 Assignee: (unassigned) => Sam Yaple (s8m)

** Changed in: nova/newton
   Importance: Undecided => High

** Changed in: nova/ocata
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687187

Title:
  metadata-api requires iptables-save/restore

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  The metadata-api still loads pieces of nova-network even when using
  neutron=True.

  Specifically, it is still loading linuxnet_interface_driver and it is
  adding in ACCEPT rules with iptables to allow the metadata port. While
  this may make sense with nova-network, it doesn't make sense for an
  api to be messing with iptables.

  Since neutron uses metadata-api through its proxy, it cannot be said
  that the metadata-api is purely a nova-network thing.

  The MetadataManager class that is loaded makes note of the fact that
  all the class does is add that ACCEPT rule [0]. Previously, in Newton,
  I was able to work around this by overriding the MetadataManager class
  with 'nova.manager.Manager', but that option was removed in Ocata
  [1]. Now the 'nova.api.manager.MetadataManager' name is hardcoded [2]
  and requires modifying nova source.

  TL;DR when using the metadata-api, bits of nova-network are still
  loaded when they shouldn't be.

  [0]
  
https://github.com/openstack/nova/blob/4f91ed3a547965ed96a22520edcfb783e7936e95/nova/api/manager.py#L24

  [1]
  https://github.com/openstack/nova/blob/stable/newton/nova/conf/service.py#L51

  [2]
  
https://github.com/openstack/nova/blob/065cd6a8d69c1ec862e5b402a3150131f35b2420/nova/service.py#L60
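
  For reference, the Newton-era workaround mentioned above amounted to a
  one-line override in nova.conf (a sketch; the option was removed in
  Ocata, so it no longer applies there):

    [DEFAULT]
    metadata_manager = nova.manager.Manager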

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659467] Re: Cannot start instances with neutrons multi-provider network

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Incomplete

** Changed in: nova/ocata
   Status: Incomplete => In Progress

** Changed in: nova/ocata
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/ocata
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659467

Title:
  Cannot start instances with neutrons multi-provider network

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Instances utilizing SR-IOV won't be able to start with a network port
  that has been created from a multi-provider (multi-segment) network.
  [1]

  Such ports will have a special "segments" field that will hold a list
  of networks and will fail when we will attempt to retrieve a single
  network, such as here [2]

  
  [1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019
  [2] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1417
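
  For illustration (segment values are made up), such a network is
  created with a ``segments`` list instead of the flat provider
  attributes that the code at [2] expects:

    POST /v2.0/networks
    {
        "network": {
            "name": "multisegment-net",
            "segments": [
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet1",
                 "provider:segmentation_id": 100},
                {"provider:network_type": "vxlan",
                 "provider:segmentation_id": 2000}
            ]
        }
    }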

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701451] Re: some legacy v2 API lose the protection of json-schema

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Alex Xu (xuhj)

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
   Importance: Undecided => Critical

** Changed in: nova/ocata
   Importance: Critical => Medium

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701451

Title:
  some legacy v2 API lose the protection of json-schema

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  JSON-Schema is used to validate the input in the legacy v2 compatible
  mode; for a legacy v2 request it won't return 400 for extra invalid
  parameters, but instead filters the extra parameters out of the input
  body so that the extra parameters cannot break the API.

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/evacuate.py#L75

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/migrate_server.py#L66

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/server_groups.py#L166

  Those APIs should be fixed to cover legacy v2 requests, and the fix
  should be back-ported.
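
  To make the intended behaviour concrete, here is a small standalone
  illustration using the jsonschema library (not nova's actual
  validation code):

    import jsonschema

    schema = {
        "type": "object",
        "properties": {"host": {"type": "string"}},
        "additionalProperties": False,
    }
    body = {"host": "compute1", "bogus": "oops"}

    # v2.1 behaviour: unknown keys are rejected (translated to HTTP 400).
    try:
        jsonschema.validate(body, schema)
    except jsonschema.ValidationError:
        pass

    # Legacy v2 compatible mode: silently drop the unknown keys instead.
    allowed = set(schema["properties"])
    filtered = {k: v for k, v in body.items() if k in allowed}
    assert filtered == {"host": "compute1"}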

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622292] Re: vf interface isn't assigned the correct mac address

2017-08-11 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: Stephen Finucane (stephenfinucane) => Moshe Levi (moshele)

** Changed in: nova/newton
 Assignee: (unassigned) => Moshe Levi (moshele)

** Changed in: nova/ocata
 Assignee: (unassigned) => Moshe Levi (moshele)

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/ocata
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622292

Title:
  vf interface isn't assigned the correct mac address

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Due to a libvirt change in version libvirt-1.2.18.3-1.fc23.x86_64
  (bug ticket for libvirt:
  https://bugzilla.redhat.com/show_bug.cgi?id=1372944),
  the VF interface MAC address is not synced with the assigned VF MAC
  address. When creating a guest with macvtap passthrough, the guest
  will not get DHCP because the VF's netdev name is not set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710250] [NEW] glance v2 has no proper 'copy-from' method

2017-08-11 Thread Abel Lopez
Public bug reported:

If one is following the documented steps, there is no way to import an
image from a remote URL using glance v2, unless one enables deprecated
options.

It's asinine to force users to hairpin an image file (e.g. download it
from the remote URL to the local FS, then upload it to the image
service).

A workaround https://bugs.launchpad.net/glance/+bug/1595335 notes that
`show_multiple_locations` is required for this to work via Horizon, and
https://bugs.launchpad.net/glance/+bug/1618583 has this option
deprecated.

This not only affects Horizon, as using the CLI commands to set an
image's location causes a 403 error, unless we set
`show_multiple_locations=True`.

There has got to be a way to accomplish this as easily as the v1
`--copy-from` method, and it needs to be documented; only documenting
examples that use `--file` is a poor assumption about actual use.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1710250

Title:
  glance v2 has no proper 'copy-from' method

Status in Glance:
  New

Bug description:
  If one is following the documented steps, there is no way to import an
  image from a remote URL using glance v2, unless one enables deprecated
  options.

  It's asinine to force users to hairpin an image file (e.g. download it
  from the remote URL to the local FS, then upload it to the image
  service).

  A workaround https://bugs.launchpad.net/glance/+bug/1595335 notes that
  `show_multiple_locations` is required for this to work via Horizon,
  and https://bugs.launchpad.net/glance/+bug/1618583 has this option
  deprecated.

  This not only affects Horizon, as using the CLI commands to set an
  image's location causes a 403 error, unless we set
  `show_multiple_locations=True`.

  There has got to be a way to accomplish this as easily as the v1
  `--copy-from` method, and it needs to be documented; only documenting
  examples that use `--file` is a poor assumption about actual use.
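
  For comparison, a sketch of the two flows (image name and URL are
  placeholders):

    # v1: a single step; the server pulls the image itself.
    glance --os-image-api-version 1 image-create --name cirros \
        --disk-format qcow2 --container-format bare \
        --copy-from http://example.com/cirros.qcow2

    # v2 workaround described above; needs show_multiple_locations=True.
    glance --os-image-api-version 2 image-create --name cirros \
        --disk-format qcow2 --container-format bare
    glance location-add --url http://example.com/cirros.qcow2 <image-id>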

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1710250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642679] Re: The OpenStack network_config.json implementation fails on Hyper-V compute nodes

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/ocata
 Assignee: (unassigned) => Vladyslav Drok (vdrok)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642679

Title:
  The OpenStack network_config.json implementation fails on Hyper-V
  compute nodes

Status in cloud-init:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact]
  When a config drive provides network_data.json on Azure OpenStack,
  cloud-init will fail to configure networking.

  Console log and /var/log/cloud-init.log will show:
   ValueError: Unknown network_data link type: hyperv

  This would also occur when the type of the network device declared
  to cloud-init was 'hw_veb', 'hyperv', 'vhostuser' or 'vrouter'.

  [Test Case]
  Launch an instance with config drive on hyperv cloud.

  [Regression Potential]
  Low to none.   cloud-init is relaxing requirements and will accept things
  now that it previously complained were invalid.
  === End SRU Template ===

  We have discovered an issue when booting Xenial instances on OpenStack
  environments (Liberty or newer) and Hyper-V compute nodes using config
  drive as metadata source.

  When applying the network_config.json, cloud-init fails with this error:
  http://paste.openstack.org/show/RvHZJqn48JBb0TO9QznL/

  The fix would be to add 'hyperv' as a link type here:
  /usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py, line 
587

  Related bugs:
   * bug 1674946: cloud-init fails with "Unknown network_data link type: dvs"
   * bug 1642679: OpenStack network_config.json implementation fails on Hyper-V 
compute nodes
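
  The shape of the fix described above is roughly the following
  (illustrative names, not the exact cloud-init source):

    KNOWN_PHYSICAL_TYPES = (
        'ethernet', 'phy', 'tap', 'vif', 'ovs',
        'hw_veb', 'hyperv', 'vhostuser', 'vrouter',  # relaxed by the fix
    )

    def convert_link(link):
        link_type = link['type']
        if link_type not in KNOWN_PHYSICAL_TYPES:
            raise ValueError(
                "Unknown network_data link type: %s" % link_type)
        return {'type': 'physical',
                'name': link.get('id'),
                'mac_address': link.get('ethernet_mac_address')}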

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1642679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701451] Re: some legacy v2 API lose the protection of json-schema

2017-08-11 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => In Progress

** Tags added: api

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/newton
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701451

Title:
  some legacy v2 API lose the protection of json-schema

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  JSON-Schema is used to validate the input in the legacy v2 compatible
  mode; for a legacy v2 request it won't return 400 for extra invalid
  parameters, but instead filters the extra parameters out of the input
  body so that the extra parameters cannot break the API.

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/evacuate.py#L75

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/migrate_server.py#L66

  
https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/server_groups.py#L166

  Those APIs should be fixed to cover legacy v2 requests, and the fix
  should be back-ported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686116] Re: domain xml not well defined when using virtio-scsi disk bus

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/ocata
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686116

Title:
  domain xml not well defined when using virtio-scsi disk bus

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  When using virtio-scsi we should be able to attach up to 256 devices,
  but because the XML device definition does not specify which
  controller to use and does not place the disk on that controller, we
  are currently able to attach no more than 6 disks.

  Steps to reproduce the issue:

  - glance image-update --property hw_scsi_model=virtio-scsi <image>
    (creates the virtio-scsi controller)
  - glance image-update --property hw_disk_bus=scsi <image>
    (disks will use the scsi bus)

  Start an instance with more than 6 disks/volumes.
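
  For reference, a well-formed definition would tie each disk to the
  virtio-scsi controller explicitly, along these lines (paths and unit
  numbers are illustrative):

    <controller type='scsi' index='0' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/nova/instances/INSTANCE_UUID/disk.1'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>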

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1686116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699927] Re: Misuse of assertIsNone in tests

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/ocata
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1699927

Title:
  Misuse of assertIsNone in tests

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  There are some misuses of assertIsNone.

  self.assertIsNone(None, )

  This check always passes, so it is useless.

  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/unit/conductor/test_conductor.py#L1484
  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/unit/virt/libvirt/test_migration.py#L47
  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/functional/wsgi/test_servers.py#L74
  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/functional/wsgi/test_servers.py#L75
  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/functional/wsgi/test_servers.py#L104
  
https://github.com/openstack/nova/blob/6386692b4b01337569eab4cd6c2f0219d0fe1e74/nova/tests/functional/wsgi/test_servers.py#L105
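
  To make the misuse concrete, an illustrative test (not the linked nova
  tests):

    import unittest

    class AssertIsNoneMisuse(unittest.TestCase):
        def test_misuse(self):
            source_compute = 'compute1'
            # The second argument is only the failure *message*, so this
            # passes regardless of the value of source_compute.
            self.assertIsNone(None, source_compute)

        def test_intended(self):
            source_compute = None
            # The intended check: assert the value itself is None.
            self.assertIsNone(source_compute)

    if __name__ == '__main__':
        unittest.main()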

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1699927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710249] [NEW] nova doesn't clean up the resources after shelve offload

2017-08-11 Thread Balazs Gibizer
Public bug reported:

Scenario:
1) boot a server
2) shelve the server
3) shelve offload the server
4) unshelve the server

The expectation is that after 3) the server has no allocations, and after 4)
the server has a single allocation on the host it is booted on during 4).
However, after 3) the placement API still shows an allocation for the server,
and after 4) the server has double allocations.

Debug log for the reproduction: https://paste.ubuntu.com/25290864/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: openstack-version.pike placement shelve

** Tags added: openstack-version.pike placement shelve

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710249

Title:
  nova doesn't clean up the resources after shelve offload

Status in OpenStack Compute (nova):
  New

Bug description:
  Scenario:
  1) boot a server
  2) shelve the server
  3) shelve offload the server
  4) unshelve the server

  The expectation is that after 3) the server has no allocations, and
  after 4) the server has a single allocation on the host it is booted
  on during 4). However, after 3) the placement API still shows an
  allocation for the server, and after 4) the server has double
  allocations.

  Debug log for the reproduction: https://paste.ubuntu.com/25290864/
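
  A sketch of the reproduction with the nova CLI (names are
  placeholders); the allocations can be checked in the placement API via
  GET /allocations/{instance_uuid} after steps 3) and 4):

    nova boot --flavor m1.tiny --image cirros repro-server   # step 1
    nova shelve repro-server                                 # step 2
    nova shelve-offload repro-server                         # step 3
    nova unshelve repro-server                               # step 4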

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710114] [NEW] doesn't create new instance on python 3

2017-08-11 Thread Artem Tiumentcev
Public bug reported:

When I tried creating a new instance in Horizon running on Python 3, I
got this error:

malformed JSON request: the JSON object must be str, not 'bytes'

Log:

[11/Aug/2017 08:58:05] "POST /api/nova/servers/ HTTP/1.1" 400 66

Screenshots:

http://imgur.com/a/oyiYS
http://imgur.com/a/IhKOA

** Affects: horizon
 Importance: Undecided
 Assignee: Artem Tiumentcev (darland-maik)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710114

Title:
  doesn't create new instance on python 3

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When I tried creating a new instance in Horizon running on Python 3,
  I got this error:

  malformed JSON request: the JSON object must be str, not 'bytes'

  Log:

  [11/Aug/2017 08:58:05] "POST /api/nova/servers/ HTTP/1.1" 400 66

  Screenshots:

  http://imgur.com/a/oyiYS
  http://imgur.com/a/IhKOA
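
  The underlying Python behaviour in isolation (illustrative, not the
  Horizon REST API code): on Python 3.5 and earlier, json.loads()
  rejects bytes, so a request body read as bytes has to be decoded
  first.

    import json

    body = b'{"name": "test-server"}'   # WSGI request bodies are bytes

    try:
        json.loads(body)      # TypeError on Python 3.5 and earlier
    except TypeError as exc:
        print(exc)            # the JSON object must be str, not 'bytes'

    data = json.loads(body.decode('utf-8'))   # works everywhere
    print(data['name'])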

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710078] [NEW] Fix wrong links

2017-08-11 Thread yfzhao
Public bug reported:

Some docs links have changed. We should update the broken links in our
code.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1710078

Title:
  Fix wrong links

Status in Glance:
  New

Bug description:
  Some docs links have changed. We should update the broken links in our
  code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1710078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406333] Re: LOG messages localized, shouldn't be

2017-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/492044
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/commit/?id=66c1e460865d64a6ce8ce454a81048e2bdafa9eb
Submitter: Jenkins
Branch:master

commit 66c1e460865d64a6ce8ce454a81048e2bdafa9eb
Author: Akihiro Motoki 
Date:   Wed Aug 9 08:43:49 2017 +

Ensure log messages are not translated

This is VPNaaS dashboard version of the horizon patch
https://review.openstack.org/#/c/455635/.
The horizon patch was merged before VPNaaS dashboard split out,
but unfortunately I failed to import the change.

The following describes the rationale for this change
(quoted from the horizon change).

Previously, translated messages were included in log messages,
with the language determined by what the user had chosen.
This made it difficult for operators to understand log messages.

This commit tries to use English messages for all log messages.
The following policies are applied based on the past discussions
in the bug 1406333 and related reviews.

- English messages are used for log messages.
- log messages include exception messages if possible
  to help operators identify what happens.
- Use ID rather than name for log messages
  as ID is much more unique compared to name.
- LOG.debug() in success code path are deleted.
  We don't log success messages in most places and API calls to
  back-end services can be logged from python bindings.

Change-Id: I1a37b7ccbfa29cd83456931f7864ad9ce31aa570
Closes-Bug: #1406333
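
To illustrate the policies above, a made-up before/after example (not
the dashboard code):

    import logging

    LOG = logging.getLogger(__name__)

    def _(msg):
        # Stand-in for the translation function used by the old code.
        return msg

    def update_ike_policy(ikepolicy_id, ikepolicy_name):
        try:
            raise RuntimeError("backend unavailable")  # simulate a failure
        except RuntimeError as exc:
            # Before (discouraged): translated, name-based, no exception.
            LOG.error(_("Failed to update IKE policy %s") % ikepolicy_name)
            # After: plain English, ID-based, exception included.
            LOG.error("Failed to update IKE policy %(id)s: %(exc)s",
                      {'id': ikepolicy_id, 'exc': exc})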


** Changed in: neutron-vpnaas-dashboard
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1406333

Title:
  LOG messages localized, shouldn't be

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Neutron VPNaaS dashboard:
  Fix Released

Bug description:
  LOG messages should not be localized. There are a few places in
  project/firewalls/forms.py that they are. These instances should be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1406333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710083] [NEW] Allow to set/modify network mtu

2017-08-11 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/483518
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit f21c7e2851bc99b424bdc5322dcd0e3dee7ee5a3
Author: Ihar Hrachyshka 
Date:   Mon Aug 7 10:18:11 2017 -0700

Allow to set/modify network mtu

This patch adds the ``net-mtu-writable`` API extension, which allows
writing to the network ``mtu`` attribute.

The patch also adds support for the extension to ml2, as well as covers
the feature with unit and tempest tests. Agent side implementation of
the feature is moved into a separate patch to ease review.

DocImpact: neutron controller now supports ``net-mtu-writable`` API
   extension.
APIImpact: new ``net-mtu-writable`` API extension was added.

Related-Bug: #1671634
Change-Id: Ib232796562edd8fa69ec06b0cc5cb752c1467add

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710083

Title:
  Allow to set/modify network mtu

Status in neutron:
  New

Bug description:
  https://review.openstack.org/483518
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit f21c7e2851bc99b424bdc5322dcd0e3dee7ee5a3
  Author: Ihar Hrachyshka 
  Date:   Mon Aug 7 10:18:11 2017 -0700

  Allow to set/modify network mtu
  
  This patch adds the ``net-mtu-writable`` API extension, which allows
  writing to the network ``mtu`` attribute.
  
  The patch also adds support for the extension to ml2, as well as covers
  the feature with unit and tempest tests. Agent side implementation of
  the feature is moved into a separate patch to ease review.
  
  DocImpact: neutron controller now supports ``net-mtu-writable`` API
 extension.
  APIImpact: new ``net-mtu-writable`` API extension was added.
  
  Related-Bug: #1671634
  Change-Id: Ib232796562edd8fa69ec06b0cc5cb752c1467add

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1710083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710083] Re: Allow to set/modify network mtu

2017-08-11 Thread Boden R
We still need to document this extension and its updatable nature in the
api-ref
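
For the api-ref, the writable attribute boils down to an ordinary
network update once the extension is present (illustrative request):

    PUT /v2.0/networks/{network_id}
    {"network": {"mtu": 1450}}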

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Released => Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

** Tags added: api-ref lib

** Summary changed:

- Allow to set/modify network mtu
+ [api-ref] Allow to set/modify network mtu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710083

Title:
  [api-ref] Allow to set/modify network mtu

Status in neutron:
  Confirmed

Bug description:
  https://review.openstack.org/483518
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit f21c7e2851bc99b424bdc5322dcd0e3dee7ee5a3
  Author: Ihar Hrachyshka 
  Date:   Mon Aug 7 10:18:11 2017 -0700

  Allow to set/modify network mtu
  
  This patch adds the ``net-mtu-writable`` API extension, which allows
  writing to the network ``mtu`` attribute.
  
  The patch also adds support for the extension to ml2, as well as covers
  the feature with unit and tempest tests. Agent side implementation of
  the feature is moved into a separate patch to ease review.
  
  DocImpact: neutron controller now supports ``net-mtu-writable`` API
 extension.
  APIImpact: new ``net-mtu-writable`` API extension was added.
  
  Related-Bug: #1671634
  Change-Id: Ib232796562edd8fa69ec06b0cc5cb752c1467add

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1710083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710141] [NEW] Continual warnings in n-cpu logs about being unable to delete inventory for an ironic node with an instance on it

2017-08-11 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 nova-
compute[19132]: WARNING nova.scheduler.client.report [None req-9db22a6d-
e88a-42b0-879e-8fe523dcc664 None None] [req-
2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory 'VCPU,
MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23-ad6f-
d86c1f0a0e3f because the inventory is in use.

As soon as an ironic node has an instance built on it, the node state is
ACTIVE which means that this method returns True:

https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L176

Saying the node is unavailable, because it's wholly consumed I guess.

That's used here:

https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L324

And that's checked here when reporting inventory to the resource
tracker:

https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L741

Which then tries to delete the inventory for the node resource provider
in placement, which fails because it's already got an instance running
on it that is consuming inventory:

http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

Aug 09 19:31:21.391146 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: INFO nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] Compute node 
38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f reported no inventory but previous 
inventory was detected. Deleting existing inventory records.
Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: WARNING nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] 
[req-2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory 'VCPU, 
MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f 
because the inventory is in use.

This is also bad because if the node was updated with a resource_class,
that resource class won't be automatically created in Placement here:

https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/scheduler/client/report.py#L789

Because the driver didn't report it in the get_inventory method.

And that has an impact on this code to migrate
instance.flavor.extra_specs to have custom resource class overrides from
ironic nodes that now have a resource_class set:

https://review.openstack.org/#/c/487954/

So we've got a bit of a chicken and egg problem here.

Manually testing the ironic flavor migration code hits this problem, as
seen here:

http://paste.openstack.org/show/618160/

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: ironic pike-rc-potential placement

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Tags added: pike-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710141

Title:
  Continual warnings in n-cpu logs about being unable to delete
  inventory for an ironic node with an instance on it

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Seen here:

  http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

  Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 nova-
  compute[19132]: WARNING nova.scheduler.client.report [None req-
  9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] [req-
  2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory
  'VCPU, MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23
  -ad6f-d86c1f0a0e3f because the inventory is in use.

  As soon as an ironic node has an instance built on it, the node state
  is ACTIVE which means that this method returns True:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L176

  Saying the node is unavailable, because it's wholly consumed I guess.

  That's used here:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L324

  And that's checked here when reporting inventory to the resource
  tracker:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L741

  Which then tries to delete the inventory for the node resource
  provider in placement, which fails because it's already got an
 

[Yahoo-eng-team] [Bug 1710141] Re: Continual warnings in n-cpu logs about being unable to delete inventory for an ironic node with an instance on it

2017-08-11 Thread Matt Riedemann
One question is, why don't we report inventory for an ACTIVE node? If
the inventory is 1 but an instance is also allocating that 1 of whatever
resource class, then isn't that sufficient? In other words, if an
instance is consuming all of the node inventory, that should take the
node out of scheduling decisions for building new instances, which is
also how things work for regular compute nodes for building VMs.

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710141

Title:
  Continual warnings in n-cpu logs about being unable to delete
  inventory for an ironic node with an instance on it

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

  Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 nova-
  compute[19132]: WARNING nova.scheduler.client.report [None req-
  9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] [req-
  2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory
  'VCPU, MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23
  -ad6f-d86c1f0a0e3f because the inventory is in use.

  As soon as an ironic node has an instance built on it, the node state
  is ACTIVE which means that this method returns True:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L176

  Saying the node is unavailable, because it's wholly consumed I guess.

  That's used here:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L324

  And that's checked here when reporting inventory to the resource
  tracker:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L741

  Which then tries to delete the inventory for the node resource
  provider in placement, which fails because it's already got an
  instance running on it that is consuming inventory:

  http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

  Aug 09 19:31:21.391146 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: INFO nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] Compute node 
38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f reported no inventory but previous 
inventory was detected. Deleting existing inventory records.
  Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: WARNING nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] 
[req-2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory 'VCPU, 
MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f 
because the inventory is in use.

  This is also bad because if the node was updated with a
  resource_class, that resource class won't be automatically created in
  Placement here:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/scheduler/client/report.py#L789

  Because the driver didn't report it in the get_inventory method.

  And that has an impact on this code to migrate
  instance.flavor.extra_specs to have custom resource class overrides
  from ironic nodes that now have a resource_class set:

  https://review.openstack.org/#/c/487954/

  So we've got a bit of a chicken and egg problem here.

  Manually testing the ironic flavor migration code hits this problem,
  as seen here:

  http://paste.openstack.org/show/618160/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710131] [NEW] auto generation of language list does not work expected

2017-08-11 Thread Akihiro Motoki
Public bug reported:

During Pike cycle, we introduced a mechanism to auto-generate the
language list based on PO file availability [1].

However we received a couple of bug reports which seem to be triggered
by this change.

One is "can't change the language in user settings" and only "English"
is available in the user setting menu [2]. Note that all messages are
displayed in a local language so it is just a problem in the generation
logic of the language list.

Another report is Simplified Chinese is not available in the language
list [3].

It looks better to revert the change [3]. It turns out there are several
cases the patch did not account for when it was implemented.

[1] https://review.openstack.org/#/c/450126/
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/%23openstack-i18n.2017-08-03.log.html#t2017-08-03T14:05:05
[3] http://lists.openstack.org/pipermail/openstack-i18n/2017-August/003017.html

** Affects: horizon
 Importance: Critical
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: pike-rc-potential

** Changed in: horizon
Milestone: None => pike-rc2

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710131

Title:
  auto generation of language list does not work expected

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  During Pike cycle, we introduced a mechanism to auto-generate the
  language list based on PO file availability [1].

  However we received a couple of bug reports which seem to be triggered
  by this change.

  One is "can't change the language in user settings" and only "English"
  is available in the user setting menu [2]. Note that all messages are
  displayed in a local language so it is just a problem in the
  generation logic of the language list.

  Another report is Simplified Chinese is not available in the language
  list [3].

  It looks better to revert the change [3]. It turns out there are
  several cases the patch did not account for when it was implemented.

  [1] https://review.openstack.org/#/c/450126/
  [2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/%23openstack-i18n.2017-08-03.log.html#t2017-08-03T14:05:05
  [3] 
http://lists.openstack.org/pipermail/openstack-i18n/2017-August/003017.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635554] Re: Delete Router / race condition

2017-08-11 Thread Hua Zhang
** Changed in: neutron
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635554

Title:
  Delete Router /  race condition

Status in neutron:
  Confirmed

Bug description:
  When deleting a router the logfile is filled up.

  
  CentOS7
  Newton(RDO)


  2016-10-21 09:45:02.526 16200 DEBUG neutron.agent.linux.utils [-] Exit code: 
0 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:140
  2016-10-21 09:45:02.526 16200 WARNING neutron.agent.l3.namespaces [-] 
Namespace qrouter-8cf5-5c5c-461c-84f3-c8abeca8f79a does not exist. Skipping 
delete
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 8cf5-5c5c-461c-84f3-c8abeca8f79a
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 357, in 
_safe_router_removed
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 376, in 
_router_removed
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent ri.delete(self)
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 381, in 
delete
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent 
self.destroy_state_change_monitor(self.process_monitor)
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 325, in 
destroy_state_change_monitor
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent pm = 
self._get_state_change_monitor_process_manager()
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 296, in 
_get_state_change_monitor_process_manager
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent 
default_cmd_callback=self._get_state_change_monitor_callback())
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 299, in 
_get_state_change_monitor_callback
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent ha_device = 
self.get_ha_device_name()
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 137, in 
get_ha_device_name
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent return 
(HA_DEV_PREFIX + self.ha_port['id'])[:self.driver.DEV_NAME_LEN]
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent TypeError: 
'NoneType' object has no attribute '__getitem__'
  2016-10-21 09:45:02.527 16200 ERROR neutron.agent.l3.agent
  2016-10-21 09:45:02.528 16200 DEBUG neutron.agent.l3.agent [-] Finished a 
router update for 8cf5-5c5c-461c-84f3-c8abeca8f79a _process_router_update 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py:504

  
  See full log
  http://paste.openstack.org/show/586656/
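
  The traceback boils down to ha_port being None by the time the delete
  is processed; a minimal sketch of the defensive shape a fix could take
  (illustrative, not the actual neutron patch):

    HA_DEV_PREFIX = 'ha-'

    def get_ha_device_name(ha_port, dev_name_len=14):
        # Guard against the race above, where the HA port is already
        # gone when the router removal is handled by the L3 agent.
        if not ha_port:
            return None
        return (HA_DEV_PREFIX + ha_port['id'])[:dev_name_len]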

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656017] Re: nova-manage cell_v2 map_cell0 always returns a non-0 exit code

2017-08-11 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656017

Title:
  nova-manage cell_v2 map_cell0 always returns a non-0 exit code

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  See the discussion in this review:

  https://review.openstack.org/#/c/409890/1/nova/cmd/manage.py@1289

  The map_cell0 CLI is really treated like a function and it's used by
  the simple_cell_setup command. If map_cell0 is used as a standalone
  command it always returns a non-0 exit code because it's returning a
  CellMapping object (or failing with a duplicate entry error if the
  cell0 mapping already exists).

  We should split the main part of the map_cell0 function out into a
  private method and then treat map_cell0 as a normal CLI with integer
  exit codes (0 on success, >0 on failure) and print out whatever
  information is needed when mapping cell0, like the uuid for example.
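
  A self-contained sketch of that split (illustrative, not the merged
  nova patch):

    import collections
    import uuid

    CellMapping = collections.namedtuple('CellMapping', ['uuid'])

    def _map_cell0(database_connection=None):
        # Stand-in for the real work of creating the cell0 mapping.
        return CellMapping(uuid=str(uuid.uuid4()))

    def map_cell0(database_connection=None):
        # CLI entry point: print useful info and return an integer exit
        # code instead of leaking the CellMapping object to the caller.
        try:
            cell0 = _map_cell0(database_connection)
        except Exception as exc:
            print("map_cell0 failed: %s" % exc)
            return 1
        print("Cell0 mapped: %s" % cell0.uuid)
        return 0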

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707021] Re: Ironic Flavor Migration can miss instances

2017-08-11 Thread Matt Riedemann
** Changed in: nova
 Assignee: Sylvain Bauza (sylvain-bauza) => Ed Leafe (ed-leafe)

** Also affects: nova/pike
   Importance: Medium
 Assignee: Ed Leafe (ed-leafe)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707021

Title:
  Ironic Flavor Migration can miss instances

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  The recently merged patch (https://review.openstack.org/484949) runs
  the migration in the driver's init_host() method. It has since been
  pointed out that if there are multiple computes handling ironic, some
  instances can get skipped if the hash ring is refreshed and an
  unmigrated instance is placed on a compute whose init_host has already
  run. While a rare thing, this should be corrected to ensure that all
  instances have their flavors migrated in Pike.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643301] Re: bootstrapping keystone failed when LDAP backend is in use

2017-08-11 Thread Lance Bragstad
This was discussed with Colleen and Kristi in IRC [0]. The following was
proposed

 - write a patch so that devstack always configures sql as the identity backend
 - when ldap is set as KEYSTONE_IDENTITY_BACKEND, ensure it's done in a 
domain-specific way
 - write a patch so keystone fails gracefully with an informative warning 
saying `bootstrap` is only intended for sql-based deployments

Thoughts on the approach?


[0] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-08-11.log.html#t2017-08-11T20:45:37
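
For the "domain-specific way" in the proposal above, the usual shape is
an SQL default identity backend plus a per-domain LDAP config
(illustrative paths and values):

    # /etc/keystone/keystone.conf
    [identity]
    driver = sql
    domain_specific_drivers_enabled = true
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.Users.conf
    [identity]
    driver = ldap

    [ldap]
    url = ldap://ldap.example.com
    suffix = dc=example,dc=com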

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1643301

Title:
  bootstrapping keystone failed when LDAP backend is in use

Status in devstack:
  In Progress
Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  "keystone-manage bootstrap" command is coded for SQL backend, it's
  should be okay if admin token is always supported by keystone, but we
  have a plan to remove the support of admin token since it's expose a
  security risk. And the patch to remove the support of write operation
  for LDAP backend is on the fly.

  Based on the above consideration, we should enable the bootrapping
  keystone when using LDAP backend, but it currently not work sometimes,
  for example.

  
  # keystone-manage bootstrap --bootstrap-username Dave --bootstrap-password 
abc123 --bootstrap-project-name admin --bootstrap-role-name admin


2016-10-27 16:26:29.845 11359 TRACE keystone return 
self.result(msgid,all=1,timeout=self.timeout)
2016-10-27 16:26:29.845 11359 TRACE keystone   File 
"/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 497, in result
2016-10-27 16:26:29.845 11359 TRACE keystone resp_type, resp_data, 
resp_msgid = self.result2(msgid,all,timeout)
2016-10-27 16:26:29.845 11359 TRACE keystone   File 
"/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 501, in 
result2
2016-10-27 16:26:29.845 11359 TRACE keystone resp_type, resp_data, 
resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
2016-10-27 16:26:29.845 11359 TRACE keystone   File 
"/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 508, in 
result3
2016-10-27 16:26:29.845 11359 TRACE keystone 
resp_ctrl_classes=resp_ctrl_classes
2016-10-27 16:26:29.845 11359 TRACE keystone   File 
"/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 515, in 
result4
2016-10-27 16:26:29.845 11359 TRACE keystone ldap_result = 
self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
2016-10-27 16:26:29.845 11359 TRACE keystone   File 
"/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 106, in 
_ldap_call
2016-10-27 16:26:29.845 11359 TRACE keystone result = 
func(*args,**kwargs)
2016-10-27 16:26:29.845 11359 TRACE keystone UNDEFINED_TYPE: {'info': 
'enabled: attribute type undefined', 'desc': 'Undefined attribute type'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1643301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662626] Re: live-migrate left in migrating as domain not found

2017-08-11 Thread Matt Riedemann
** Changed in: nova/ocata
   Importance: Undecided => Medium

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
 Assignee: (unassigned) => Shane Peters (shaner)

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/newton
   Importance: Undecided => Medium

** Tags removed: ocata-backport-potential
** Tags added: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662626

Title:
  live-migrate left in migrating as domain not found

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Fix Committed

Bug description:
  A live-migration stress test was working fine when suddenly a VM
  stopped migrating. It failed with this error:

  ERROR nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-
  b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c
  669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-
  9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42
  Error=Domain not found: no domain with matching uuid '62034d78-3144
  -4efd-9c2c-8a792aed3d6b' (instance-0431)

  The full stack trace:

  2017-02-05 02:33:41.787 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration running for 240 secs, memory 9% 
remaining; (bytes processed=15198240264, remaining=1680875520, 
total=17314955264)
  2017-02-05 02:33:45.795 19770 INFO nova.compute.manager 
[req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] VM Paused (Lifecycle Event)
  2017-02-05 02:33:45.870 19770 INFO nova.compute.manager 
[req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] During sync_power_state the instance has 
a pending task (migrating). Skip.
  2017-02-05 02:33:45.883 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration operation has completed
  2017-02-05 02:33:45.884 19770 INFO nova.compute.manager 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] _post_live_migration() is started..
  2017-02-05 02:33:46.156 19770 INFO os_vif 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] Successfully unplugged vif 
VIFBridge(active=True,address=fa:16:3e:a2:90:55,bridge_name='brq476ab6ba-b3',has_traffic_filtering=True,id=98d476b3-0ead-4adb-ad54-1dff63edcd65,network=Network(476ab6ba-b32e-409e-9711-9412e8475ea0),plugin='linux_bridge',port_profile=,preserve_on_delete=True,vif_name='tap98d476b3-0e')
  2017-02-05 02:33:46.189 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Deleting instance files 
/var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del
  2017-02-05 02:33:46.195 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Deletion of 
/var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del complete

  2017-02-05 02:33:46.334 19770 ERROR nova.virt.libvirt.driver [req-
  df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c
  669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-
  9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42
  Error=Domain not found: no domain with matching uuid '62034d78-3144
  -4efd-9c2c-8a792aed3d6b' (instance-0431)

  2017-02-05 02:33:46.363 19770 WARNING nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Error monitoring migration: Domain not 
found: no domain with matching uuid '62034d78-3144-4efd-9c2c-8a792aed3d6b' 
(instance-0431)
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Traceback (most recent call last):
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b]   File 
"/openstack/venvs/nova-14.0.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 6345, in _live_migration
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver 
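
  A minimal sketch of the tolerant-cleanup pattern this trace suggests
  (illustration only, not nova's actual driver code): when the domain has
  already disappeared by the time post-migration cleanup runs, libvirt's
  "no domain" error (code 42, VIR_ERR_NO_DOMAIN) can be treated as success
  instead of aborting the monitor and leaving the instance in 'migrating'.

      # Illustrative sketch only; assumes the python libvirt bindings.
      import libvirt

      def undefine_if_present(conn, instance_uuid):
          """Undefine a domain, tolerating one that is already gone."""
          try:
              dom = conn.lookupByUUIDString(instance_uuid)
              dom.undefine()
          except libvirt.libvirtError as exc:
              if exc.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                  # Code 42: the domain was already removed on this host.
                  return
              raise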

[Yahoo-eng-team] [Bug 1693315] Re: Unhelpful invalid bdm error in compute logs when volume creation fails during boot from volume

2017-08-11 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693315

Title:
  Unhelpful invalid bdm error in compute logs when volume creation fails
  during boot from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  This came up in IRC while debugging a separate problem with a user.

  They were booting from volume, with nova creating the volume, and ended
  up with this unhelpful error message:

  BuildAbortException: Build of instance
  9484f5a7-3198-47ff-b728-178515a26277 aborted: Block Device Mapping is
  Invalid.

  That's from this generic exception that is raised up:

  
https://github.com/openstack/nova/blob/81bdbd0b50aeac9a677a0cef9001081008a2c407/nova/compute/manager.py#L1595

  The actual exception in the traceback is much more specific:

  http://paste.as47869.net/p/9qbburh7z3w3toi

  2017-05-24 16:33:26.127 2331 ERROR nova.compute.manager [instance:
  9484f5a7-3198-47ff-b728-178515a26277] VolumeNotCreated: Volume
  da947c97-66c6-4b7e-9ae6-54eb8128bb75 did not finish being created even
  after we waited 3 seconds or 2 attempts. And its status is error.

  That's showing that the volume failed to be created almost
  immediately.

  It would be better to include that error message in what goes into the
  BuildAbortException which is what ultimately goes into the recorded
  instance fault:

  
https://github.com/openstack/nova/blob/81bdbd0b50aeac9a677a0cef9001081008a2c407/nova/compute/manager.py#L1878
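
  A hedged sketch of that improvement (simplified names, not the actual
  nova code): carry the specific failure text into the generic abort
  exception so the recorded instance fault is actionable.

      # Illustration under simplified assumptions, not nova's real classes.
      class BuildAbortException(Exception):
          def __init__(self, instance_uuid, reason):
              super(BuildAbortException, self).__init__(
                  "Build of instance %s aborted: %s" % (instance_uuid, reason))

      def prepare_block_device(instance_uuid, create_volume):
          try:
              return create_volume()
          except Exception as exc:
              # Before: the reason was just "Block Device Mapping is Invalid."
              # Including the underlying cause (e.g. the VolumeNotCreated
              # message) makes the fault self-explanatory.
              raise BuildAbortException(
                  instance_uuid, "Block Device Mapping is Invalid: %s" % exc)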

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1693315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684861] Re: Mitaka -> Newton: Database online_data_migrations in newton fail due to missing keypairs

2017-08-11 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Dan Smith (danms)

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/ocata
   Status: New => Fix Released

** Changed in: nova/ocata
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/ocata
   Importance: Undecided => Low

** Changed in: nova/newton
   Importance: Undecided => Low

** Changed in: nova
   Importance: Low => Medium

** Changed in: nova/ocata
   Importance: Low => Medium

** Changed in: nova/newton
   Importance: Low => Medium

** Changed in: nova/newton
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1684861

Title:
  Mitaka -> Newton: Database online_data_migrations in newton fail due
  to missing keypairs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Fix Released

Bug description:
  Upgrading the deployment from Mitaka to Newton.
  This bug blocks people from upgrading to Ocata because the database migration 
for nova fails.

  Running nova newton 14.0.5; the database schema version is 334.

  root@moby:/backups# nova-manage db online_data_migrations
  Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value 
may be silently ignored in the future.
  Running batches of 50 until complete
  50 rows matched query migrate_flavors, 50 migrated
  20 rows matched query migrate_flavors, 20 migrated
  Error attempting to run 
  30 rows matched query migrate_instances_add_request_spec, 30 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 50 migrated
  Error attempting to run 
  /usr/lib/python2.7/dist-packages/pkg_resources/__init__.py:188: 
RuntimeWarning: You have iterated over the result of 
pkg_resources.parse_version. This is a legacy behavior which is inconsistent 
with the new version class introduced in setuptools 8.0. In most cases, 
conversion to a tuple is unnecessary. For comparison of versions, sort the 
Version instances directly. If you have another use case requiring the tuple, 
please file a bug with the setuptools project describing that need.
stacklevel=1,
  50 rows matched query migrate_instances_add_request_spec, 5 migrated
  2017-04-20 14:48:36.586 396 ERROR nova.objects.keypair 
[req-565cbe62-030b-4b00-b9db-5ee82117889b - - - - -] Some instances are still 
missing keypair information. Unable to run keypair migration at this time.
  5 rows matched query migrate_aggregates, 5 migrated
  5 rows matched query migrate_instance_groups_to_api_db, 5 migrated
  2 rows matched query delete_build_requests_with_no_instance_uuid, 2 migrated
  Error attempting to run 
  50 rows matched query migrate_instances_add_request_spec, 0 migrated
  2017-04-20 14:48:40.620 396 ERROR nova.objects.keypair 
[req-565cbe62-030b-4b00-b9db-5ee82117889b - - - - -] Some instances are still 
missing keypair information. Unable to run keypair migration at this time.
  root@moby:/backups#

  Adding a 'raise' after 
https://github.com/openstack/nova/blob/stable/newton/nova/cmd/manage.py#L896
  you can see:

  root@moby:/backups# nova-manage db online_data_migrations
  Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value 
may be silently ignored in the future.
  Running batches of 50 until complete
  Error attempting to run 
  error: 'NoneType' object has no attribute 'key_name'
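
  For context, a simplified sketch of why the output only shows a bare
  "Error attempting to run" (an approximation, not the real nova-manage
  code): the batch runner catches exceptions per migration, so the
  underlying AttributeError stays hidden unless it is re-raised or logged,
  which is what the 'raise' experiment above demonstrates.

      # Approximate shape of the batch runner's error handling.
      def run_migrations(migrations, batch_size=50):
          for name, migration in migrations:
              try:
                  matched, migrated = migration(batch_size)
                  print("%d rows matched query %s, %d migrated"
                        % (matched, name, migrated))
              except Exception:
                  # The broad catch keeps later migrations running but hides
                  # the cause; re-raising (or logging the exception) reveals
                  # e.g. "'NoneType' object has no attribute 'key_name'".
                  print("Error attempting to run %s" % name)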

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1684861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1708316] Re: Nova send wrong information when there are several networks which have same name and VM uses more than one of them

2017-08-11 Thread Kevin Benton
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708316

Title:
  Nova send wrong information when there are several networks which have
  same name and VM uses more than one of them

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  Nova reports wrong network information when several networks share the
  same name and a VM uses more than one of them.

  Steps to reproduce
  ==
  1. Create two networks with the same name
  2. Create a VM attached to both networks created in step 1.
  3. Check the VM using "nova show "

  Expected result
  ===
  ...
  | tenant_id        | 92f3ea23c5b84fd69b56583f322d213e |
  | testnet1 network | 192.168.0.12                     |
  | testnet1 network | 192.168.1.4                      |
  | updated          | 2017-07-31T14:33:49Z             |
  ...

  Actual result
  =
  ...
  | tenant_id        | 92f3ea23c5b84fd69b56583f322d213e |
  | testnet1 network | 192.168.0.12, 192.168.1.4        |
  | updated          | 2017-07-31T14:33:49Z             |
  ...

  Environment
  ===
  1. Openstack Version : I tested this using Mitaka & Ocata
  2. Network : Neutron with LinuxBridge
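
  A small sketch of the failure mode (field names here are assumptions for
  illustration, not nova's data model): grouping fixed IPs by network
  *name* rather than network id merges the addresses of distinct networks
  that happen to share a name, which is what the actual result shows.

      # Illustration only; dicts stand in for ports and their networks.
      def addresses_by_network_name(ports):
          grouped = {}
          for port in ports:
              grouped.setdefault(port['network_name'], []).append(port['ip'])
          return grouped

      ports = [
          {'network_id': 'net-a', 'network_name': 'testnet1',
           'ip': '192.168.0.12'},
          {'network_id': 'net-b', 'network_name': 'testnet1',
           'ip': '192.168.1.4'},
      ]
      # Both addresses collapse under the single key 'testnet1', which is
      # what "nova show" renders as "192.168.0.12, 192.168.1.4"; keying by
      # network_id would keep them separate.
      print(addresses_by_network_name(ports))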

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658734] Re: DNS queries of does-not-exist.example.com and example.invalid

2017-08-11 Thread Ryan Harper
** Bug watch added: Red Hat Bugzilla #1468192
   https://bugzilla.redhat.com/show_bug.cgi?id=1468192

** Also affects: cloud-init (CentOS) via
   https://bugzilla.redhat.com/show_bug.cgi?id=1468192
   Importance: Unknown
   Status: Unknown

** Merge proposal linked:
   https://code.launchpad.net/~rmccabe/cloud-init/+git/cloud-init/+merge/328877

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1658734

Title:
  DNS queries of does-not-exist.example.com and example.invalid

Status in cloud-init:
  Confirmed
Status in cloud-init package in CentOS:
  Unknown

Bug description:
  cloud-init makes several DNS queries for does-not-exist.example.com
  and example.invalid (and also some random names).
  https://git.launchpad.net/cloud-init/tree/cloudinit/util.py#n1100

  We understand that it does this to detect the kind of DNS redirection
  that's done at many universities, some ISPs, and services like OpenDNS
  (when used for filtering or typo correction).

  However, it can be problematic in an environment where an intrusion
  detection system might flag these queries as potentially malicious,
  and in a system where DNS redirection is not used it unnecessarily
  increases boot time.

  It looks like the feature was written to make it possible to disable
  it or provide specific redirection IPs, but that it never gained a
  config option to control it.
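
  For reference, a minimal sketch of how such a redirection check works
  (an approximation; cloud-init's real check lives in cloudinit/util.py
  and also probes random names): resolve names that should not exist and
  treat a successful answer as evidence that the resolver rewrites
  NXDOMAIN responses.

      # Approximate illustration, not cloud-init's implementation.
      import socket

      def name_resolves(name):
          try:
              socket.getaddrinfo(name, None)
              return True
          except socket.gaierror:
              return False

      def dns_redirection_detected():
          # Neither name should resolve on an honest resolver: ".invalid"
          # is reserved, and "does-not-exist.example.com" is expected to
          # be absent.
          probes = ("does-not-exist.example.com.", "example.invalid.")
          return any(name_resolves(p) for p in probes)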

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1658734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708133] Re: when showing quota-usage details, return NULL

2017-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/48
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9e185bca08510ec831afdd75013bb3655d9a6e76
Submitter: Jenkins
Branch:master

commit 9e185bca08510ec831afdd75013bb3655d9a6e76
Author: Zhengwei Gao 
Date:   Wed Aug 2 20:05:20 2017 +0800

Allow unprivileged users to get their quota usage

When a user tries to get the quota usage detail for their own project, it
returns null. The processing logic of the details method is incorrect: it
only allows admins to get another project's quota usage detail.

Change-Id: I2e21dac497a6c5bffba6b55cb4456820900449df
Closes-Bug: #1708133


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708133

Title:
  when showing  quota-usage details, return NULL

Status in neutron:
  Fix Released

Bug description:
  Users cannot get the quota usage detail for their own project: when they
  try, the API returns null. As written, the details logic only allows
  admins to retrieve another project's quota usage detail.
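
  A hedged sketch of the corrected check described in the commit above
  (simplified; not the actual neutron controller code): a project may
  always query its own detailed usage, while querying another project
  still requires admin rights.

      # Simplified illustration of the intended authorization rule.
      def may_get_quota_details(requester_project_id, is_admin,
                                target_project_id):
          if requester_project_id == target_project_id:
              return True      # own project: allowed for any user
          return is_admin      # another project's usage: admin only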

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710203] [NEW] Add port dns_domain processing logic

2017-08-11 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/468697
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 4a7753325999ef1e5c77f6131cfe03b2cfa364a7
Author: Miguel Lavalle 
Date:   Sat May 27 18:27:34 2017 -0500

Add port dns_domain processing logic

This patchset adds logic to the ML2 DNS integration extension to process
a new dns_domain attribute associated to ports.

This patchset belongs to a series that adds dns_domain attribute
functionality to ports.

DocImpact: Ports have a new dns_domain attribute, that takes precedence
   over networks dns_domain when published to an external DNS
   service.

APIImpact: Users can now specify a dns_domain attribute in port POST and
   PUT operations.

Change-Id: I02d8587d3a1f9f3f6b8cbc79dbe8df4b4b99a893
Partial-Bug: #1650678

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710203

Title:
  Add port dns_domain processing logic

Status in neutron:
  New

Bug description:
  https://review.openstack.org/468697
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4a7753325999ef1e5c77f6131cfe03b2cfa364a7
  Author: Miguel Lavalle 
  Date:   Sat May 27 18:27:34 2017 -0500

  Add port dns_domain processing logic
  
  This patchset adds logic to the ML2 DNS integration extension to process
  a new dns_domain attribute associated to ports.
  
  This patchset belongs to a series that adds dns_domain attribute
  functionality to ports.
  
  DocImpact: Ports have a new dns_domain attribute, that takes precedence
 over networks dns_domain when published to an external DNS
 service.
  
  APIImpact: Users can now specify a dns_domain attribute in port POST and
 PUT operations.
  
  Change-Id: I02d8587d3a1f9f3f6b8cbc79dbe8df4b4b99a893
  Partial-Bug: #1650678
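
  A one-line sketch of the documented precedence rule (illustrative only;
  plain dicts stand in for the port and network objects): a port-level
  dns_domain, when set, wins over the network-level one when records are
  published to the external DNS service.

      # Illustration of the precedence, not the ML2 extension's code.
      def effective_dns_domain(port, network):
          return port.get('dns_domain') or network.get('dns_domain') or ''

      # Example: the port's domain overrides the network's.
      assert effective_dns_domain(
          {'dns_domain': 'ports.example.org.'},
          {'dns_domain': 'nets.example.org.'}) == 'ports.example.org.'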

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1710203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707339] Re: test_trunk_subport_lifecycle fails on subport down timeout

2017-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/488914
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1d80c960f61248f11af427d6008faac7fcc2db0b
Submitter: Jenkins
Branch:master

commit 1d80c960f61248f11af427d6008faac7fcc2db0b
Author: Jakub Libosvar 
Date:   Tue Aug 1 17:04:30 2017 +

ovs-fw: Handle only known trusted ports

Similarly to filtered ports, this patch caches so-called trusted ports to
avoid processing when an unknown port is passed down to the firewall
driver. The cached ofport is used for removal, as the cache reflects the
currently installed flows.

The patch also catches an exception caused by an inconsistency coming from
ovsdb.

Closes-bug: #1707339

Change-Id: I15cdb28072835fcb8c37ae4b56fc8754375a807c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707339

Title:
  test_trunk_subport_lifecycle fails on subport down timeout

Status in neutron:
  Fix Released

Bug description:
  subport doesn't transition to DOWN state after trunk deletion

  
  logstash query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3Agate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv%20AND%20build_branch%3Amaster%20AND%20message%3A%5C%22Timed%20out%20waiting%20for%20subport%20%5C%22%20AND%20message%3A%5C%22to%20transition%20to%20DOWN%5C%22%20AND%20tags%3Aconsole

  A failed run:

  http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-
  neutron-dvr-multinode-scenario-ubuntu-xenial-
  nv/e11aeaf/logs/testr_results.html.gz

  The agent log is filled with trace:

  Error while processing VIF ports: OVSFWPortNotFound: Port
  526e3ca9-9af3-4b94-8550-90a5bdc9b4e7 is not managed by this agent.

  http://logs.openstack.org/76/467976/22/check/gate-tempest-dsvm-
  neutron-dvr-multinode-scenario-ubuntu-xenial-
  nv/e11aeaf/logs/screen-q-agt.txt.gz?level=TRACE

  Incidentally, this port is a parent of a trunk port.

  Now the trunk's OVSDB handler on the agent side relies on the OVS
  agent main loop to detect the port removal and let it notify the
  server to mark the logical port down. I wonder if these exceptions
  prevent that from happening.
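
  A compact sketch of the caching idea from the commit message (greatly
  simplified; not the real OVS firewall driver): remember the ofport of
  each trusted port whose flows were actually installed, and treat a
  removal request for a port this agent never managed as a no-op instead
  of raising OVSFWPortNotFound.

      # Simplified illustration of the trusted-port cache.
      class TrustedPortCache(object):
          def __init__(self):
              self._installed = {}        # port_id -> ofport with flows

          def port_installed(self, port_id, ofport):
              self._installed[port_id] = ofport

          def port_removed(self, port_id):
              ofport = self._installed.pop(port_id, None)
              if ofport is None:
                  # Unknown port: never managed by this agent, so there is
                  # nothing to clean up (previously this path raised
                  # OVSFWPortNotFound).
                  return None
              return ofport               # caller removes flows for ofport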

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp