[Yahoo-eng-team] [Bug 1761074] [NEW] delete-modal passes only ID to callback

2018-04-03 Thread Shu Muto
Public bug reported:

The Angular-based dialog for confirming deletion passes only the ID to the
callback in the deletion action service of each panel.

If the API does not accept an ID for deletion or other operations, e.g.
quota management in the magnum API, the panel cannot implement a deletion
action.

To allow the whole entity of the selected item to be used, we can pass it
as a second parameter to the callback.

** Affects: horizon
 Importance: Undecided
 Assignee: Shu Muto (shu-mutou)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1761074

Title:
  delete-modal passes only ID to callback

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The Angular-based dialog for confirming deletion passes only the ID to
  the callback in the deletion action service of each panel.

  If the API does not accept an ID for deletion or other operations, e.g.
  quota management in the magnum API, the panel cannot implement a deletion
  action.

  To allow the whole entity of the selected item to be used, we can pass it
  as a second parameter to the callback.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1761074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761070] [NEW] iptables rules for linuxbridge ignore bridge_mappings

2018-04-03 Thread Sam Morrison
Public bug reported:

We have bridge_mappings set for the linuxbridge agent to use a non-standard
bridge naming convention.

This works everywhere apart from setting the zone rules in iptables.

The code in neutron/agent/linux/iptables_firewall.py doesn't take the
mappings into account and just uses the default bridge name, which is
derived from the network ID.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1761070

Title:
  iptables rules for linuxbridge ignore bridge_mappings

Status in neutron:
  New

Bug description:
  We have bridge_mappings set for the linuxbridge agent to use a
  non-standard bridge naming convention.

  This works everywhere apart from setting the zone rules in iptables.

  The code in neutron/agent/linux/iptables_firewall.py doesn't take the
  mappings into account and just uses the default bridge name, which is
  derived from the network ID.
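
  A minimal sketch of the kind of mapping-aware lookup this report asks for.
  The 'brq' prefix plus 11-character truncation mirrors the linuxbridge
  agent's usual derived name; the function and argument names are
  illustrative, not the actual neutron code.

  def bridge_name_for_network(network_id, physnet=None, bridge_mappings=None):
      # bridge_mappings maps physical network name -> bridge name,
      # e.g. {'physnet1': 'br-data'} (illustrative values).
      if bridge_mappings and physnet in bridge_mappings:
          return bridge_mappings[physnet]
      # Fall back to the default derived name: 'brq' + first 11 chars of the ID.
      return 'brq' + network_id[:11]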

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1761070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761062] [NEW] clean source instance directory failed in _cleanup_resize when images_type is rbd

2018-04-03 Thread guolidong
Public bug reported:

Description
===
When images_type is rbd and an instance is booted from an image, performing
resize and resize-confirm of this instance does not clean up the source
instance directory, which later causes live migration of this instance to
fail. The following is the error log.

2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
[req-e20743e3-a683-41ba-b47b-ba92e97eff37 4f7cd8bf676d43bc9faf09b2eb41482f 
2c3d8251c39545cbb6f77f331b7164f8 - default default] Exception during message 
handling: DestinationDiskExists: The supplied disk path 
(/var/lib/nova/instances/d8db3f2a-cd8f-48e1-9951-012d762664b2) already exists, 
it is expected not to exist.
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, 
in dispatch
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _do_dispatch
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 880, in 
decorated_function
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 223, in 
decorated_function
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 211, in 
decorated_function
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5615, in 
pre_live_migration
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server migrate_data)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova_patch/virt/libvirt/driver.py", line 
1095, in wrap
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
migrate_data=migrate_data)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7081, in 
pre_live_migration
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server raise 
exception.DestinationDiskExists(path=instance_dir)
2018-03-22 10:05:40.657 20779 ERROR oslo_messaging.rpc.server 
DestinationDiskExists: The supplied disk path 
(/var/lib/nova/instances/d8db3f2a-cd8f-48e1-9951-012d762664b2) already exists, 
it is expected not to exist.
2018-03-22 10:05:40.657 20779 ERROR oslo_mess
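
The check that produces this error can be summarised as below; this is a
simplified sketch of the behaviour visible in the traceback (an existing
instance directory aborts pre_live_migration), not nova's actual code.

import os


class DestinationDiskExists(Exception):
    def __init__(self, path):
        super(DestinationDiskExists, self).__init__(
            "The supplied disk path (%s) already exists, "
            "it is expected not to exist." % path)


def create_instance_dir(instance_dir):
    # pre_live_migration refuses to proceed when the directory left over
    # from the earlier resize was never cleaned up on the target host.
    if os.path.exists(instance_dir):
        raise DestinationDiskExists(path=instance_dir)
    os.makedirs(instance_dir)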

[Yahoo-eng-team] [Bug 1761054] [NEW] nova log expose password when swapvolume

2018-04-03 Thread jichenjc
Public bug reported:

http://logs.openstack.org/50/557150/6/check/tempest-
full/1f9c9f2/controller/logs/screen-n-cpu.txt.gz#_Mar_30_08_37_13_371323

u'auth_password': u'8KigD3KKykJkJixs', u'auth_username':
u'6m4wAHCZVqFfTQaF4eZu',

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New


** Tags: security

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

** Tags added: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1761054

Title:
  nova log expose password when swapvolume

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/50/557150/6/check/tempest-
  full/1f9c9f2/controller/logs/screen-n-cpu.txt.gz#_Mar_30_08_37_13_371323

  u'auth_password': u'8KigD3KKykJkJixs', u'auth_username':
  u'6m4wAHCZVqFfTQaF4eZu',

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1761054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758952] Re: DHCP agent fails to configure routed networks due to failure generating subnet options

2018-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/556584
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bb5138cff4e6c383d4d9e902423072a1493832b9
Submitter: Zuul
Branch:master

commit bb5138cff4e6c383d4d9e902423072a1493832b9
Author: Miguel Lavalle 
Date:   Mon Mar 26 11:20:18 2018 -0500

Fix DHCP isolated subnets with routed networks

After the merge of [1], the DHCP agent will fail to configure a routed
network when attempting to generate host routes for a non-local subnet
that is also isolated from the point of view of the metadata service.
This patch adds a check to make sure that the host route is not added
for non-local subnets.

[1] https://review.openstack.org/#/c/468744

Change-Id: Ia03f538a7d2d10d600d9359da5b3a74532709d1f
Closes-Bug: #1758952


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758952

Title:
  DHCP agent fails to configure routed networks due to failure
  generating subnet options

Status in neutron:
  Fix Released

Bug description:
  In a routed networks environment, after the merge of
  https://review.openstack.org/#/c/468744, the dhcp agent attempts to
  generate subnet options for both local and non local subnets. If one
  of the non-local subnets is classified as isolated from the point of
  view of the metadata service, the following traceback occurs:

  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent [-] Unable to enable dhcp for 
40dcd8c6-e4a1-4510-ac62-f37c11e65a5a.: KeyError: 
u'2cbde018-1c03-40f8-9d67-90a03c7ebd37'
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent Traceback (most recent call last):
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 144, in call_driver
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent getattr(driver, action)(**action_kwargs)
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 219, in enable
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent self.spawn_process()
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 446, in spawn_process
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 455, in 
_spawn_or_reload_process
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent self._output_config_files()
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 499, in 
_output_config_files
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent self._output_opts_file()
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 872, in _output_opts_file
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent options, subnet_index_map = 
self._generate_opts_per_subnet()
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 933, in 
_generate_opts_per_subnet
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent subnet_dhcp_ip and subnet.ip_version == 4):
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent KeyError: u'2cbde018-1c03-40f8-9d67-90a03c7ebd37'
  Mar 26 15:21:41 compute2 neutron-dhcp-agent[31181]: ERROR 
neutron.agent.dhcp.agent 

  This traceback occurs because the non-local subnet in question has no
  interface in the DHCP agent namespace, so it is not found when:

  subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]

  is executed.
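
  A minimal sketch of the defensive lookup implied by the fix (skip
  non-local subnets that have no interface in the agent namespace).
  Variable names follow the snippet above; the surrounding function is
  illustrative, not the actual neutron code.

  def local_subnet_dhcp_ips(subnets, subnet_to_interface_ip):
      ips = {}
      for subnet in subnets:
          subnet_dhcp_ip = subnet_to_interface_ip.get(subnet.id)
          if subnet_dhcp_ip is None:
              # Non-local subnet: no interface in the DHCP namespace, so
              # do not generate metadata host routes for it.
              continue
          ips[subnet.id] = subnet_dhcp_ip
      return ips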

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1758952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1751472] Re: InventoryInUse exception is periodically logged as ERROR

2018-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/553367
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ac20fc22adb133d9de5f2ec15faad11b2de1987a
Submitter: Zuul
Branch:master

commit ac20fc22adb133d9de5f2ec15faad11b2de1987a
Author: Hironori Shiina 
Date:   Thu Mar 15 20:44:50 2018 +0900

ironic: Get correct inventory for deployed node

_node_resources_unavailable() is supposed to be called after
_node_resources_used() returns False. Because get_inventory() doesn't
satisfy this condition, this method returns an empty inventory for a
deployed bare metal node. It causes the resource tracker to try
removing an allocated inventory from placement. This removal results
in periodic unexpected errors.

This patch calls _node_resources_used() prior to
_node_resources_unavailable() to get a proper inventory.

Change-Id: I6717ce19f6005c8ebb7af75437a72876c5a53f34
Closes-Bug: 1751472
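
A rough sketch of the call ordering the commit message describes; only the
two predicate names come from the commit, while _full_inventory and the
node argument are placeholders.

def get_inventory_for_node(driver, node):
    # A deployed (used) node keeps reporting its inventory so the resource
    # tracker never tries to delete an allocated inventory from placement.
    if driver._node_resources_used(node):
        return driver._full_inventory(node)      # placeholder helper
    # Only an unused *and* unavailable node reports an empty inventory.
    if driver._node_resources_unavailable(node):
        return {}
    return driver._full_inventory(node)          # placeholder helper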


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751472

Title:
  InventoryInUse exception is periodically logged as ERROR

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  After a bare metal instance creation is started, an InventoryInUse
  exception is logged as ERROR with a stack trace in the n-cpu log.

  The ironic virt driver returns an empty inventory for a node which is
  allocated[1]. Because of this, the resource tracker tries to delete this
  inventory, which causes a conflict error because the resource provider
  for the ironic node is allocated. A warning message used to be logged for
  this conflict error[2]. After the recent change[3], an InventoryInUse
  exception is raised[4], and this exception is logged as ERROR[5].

  [1] 
https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/virt/ironic/driver.py#L780
  [2] 
https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8L994
  [3] 
https://github.com/openstack/nova/commit/26c593c91f008caab92ed52156dfe2d898955d3f#diff-94f87e728df6465becce5241f3da53c8
  [4] 
https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/scheduler/client/report.py#L878
  [5] 
https://github.com/openstack/nova/blob/26c593c91f008caab92ed52156dfe2d898955d3f/nova/compute/manager.py#L7244

  -
  The following log is from an ironic job[6].

  [6] http://logs.openstack.org/19/546919/2/check/ironic-tempest-dsvm-
  ipa-partition-pxe_ipmitool-tinyipa-
  python3/2737ab0/logs/screen-n-cpu.txt.gz?level=DEBUG#_Feb_22_11_13_08_848696

  Feb 22 11:13:08.848696 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] Node 42ae69bd-c860-4eaa-8a36-fdee78425714 is not ready for a 
deployment, reporting an empty inventory for it. Node's provision state is 
deploying, power state is power off and maintenance is False. {{(pid=14029) 
get_inventory /opt/stack/new/nova/nova/virt/ironic/driver.py:752}}
  Feb 22 11:13:08.956620 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing aggregate 
associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, 
aggregates: None {{(pid=14029) _refresh_associations 
/opt/stack/new/nova/nova/scheduler/client/report.py:773}}
  Feb 22 11:13:08.977097 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.virt.ironic.driver [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] The flavor extra_specs for Ironic instance 
803d864c-542e-4bb4-a89a-38da01cb8409 have been updated for custom resource 
class 'baremetal'. {{(pid=14029) _pike_flavor_migration 
/opt/stack/new/nova/nova/virt/ironic/driver.py:545}}
  Feb 22 11:13:08.994233 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] Refreshing trait 
associations for resource provider fdd77c1d-5b1f-4a9a-b168-0fa93362b95d, 
traits: None {{(pid=14029) _refresh_associations 
/opt/stack/new/nova/nova/scheduler/client/report.py:784}}
  Feb 22 11:13:09.058940 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
INFO nova.scheduler.client.report [None 
req-10cd394d-b1be-4541-85ed-ff2275343fb5 service nova] 
[req-55086c0b-9068-49fb-ae94-cd870ab96cab] Inventory update conflict for 
fdd77c1d-5b1f-4a9a-b168-0fa93362b95d with generation ID 2
  Feb 22 11:13:09.059437 ubuntu-xenial-ovh-bhs1-0002670096 nova-compute[14029]: 
DEBUG oslo_concurrency.lockutils [None req-10cd394d-b1be-4541-85ed-ff2275343fb5 
service nova] Lock "compute_resources" released by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resour

[Yahoo-eng-team] [Bug 1761036] [NEW] Navigation on ngdetails is not reproduced properly when specified navigation is not exist

2018-04-03 Thread Shu Muto
Public bug reported:

This issue is caused by https://review.openstack.org/#/c/491346/ , which
fixed the bug https://bugs.launchpad.net/horizon/+bug/1746706/ , and occurs
when the navigation does not contain the specified panel in the first place.

For example, a non-admin user who does not have the menu for the "Admin"
dashboard specifies "?nav=/admin/images/" at the end of the URL, like
http://host.domain/ngdetails/OS::Glance::Image/ce46ef50-850a-46ed-8707-bd5b21e4f9b7?nav=%2Fadmin%2Fimages%2F .
This asks Horizon to reproduce the navigation for the ngdetails view as it
appears on the "Admin" dashboard.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1761036

Title:
  Navigation on ngdetails is not reproduced properly when specified
  navigation is not exist

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This issue is caused by https://review.openstack.org/#/c/491346/ , which
  fixed the bug https://bugs.launchpad.net/horizon/+bug/1746706/ , and
  occurs when the navigation does not contain the specified panel in the
  first place.

  For example, a non-admin user who does not have the menu for the "Admin"
  dashboard specifies "?nav=/admin/images/" at the end of the URL, like
  http://host.domain/ngdetails/OS::Glance::Image/ce46ef50-850a-46ed-8707-bd5b21e4f9b7?nav=%2Fadmin%2Fimages%2F .
  This asks Horizon to reproduce the navigation for the ngdetails view as
  it appears on the "Admin" dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1761036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2018-04-03 Thread Ben Nemec
** Changed in: oslo.middleware
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using a logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1] https://docs.python.org/3/library/logging.html#logging.warning
  [2] https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
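
  A minimal illustration of the rename (plain stdlib logging; oslo.log
  loggers accept the same warning() call):

  import logging

  LOG = logging.getLogger(__name__)

  LOG.warn("deprecated alias, kept for backwards compatibility")
  LOG.warning("preferred, non-deprecated spelling")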

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592043] Re: os-brick 1.4.0 increases volume setup failure rates

2018-04-03 Thread Ben Nemec
The oslo.privsep part of this bug was fixed in
https://review.openstack.org/#/c/329766/

I'm not sure why that didn't show up as it does appear to have a bug
reference.

** Changed in: oslo.privsep
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592043

Title:
  os-brick 1.4.0 increases volume setup failure rates

Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Invalid
Status in oslo.privsep:
  Fix Released

Bug description:
  Since merging os-brick 1.4.0 into upper-constraints, the multinode
  grenade jobs are hitting a nearly 1/3 failure rate on boot-from-volume
  scenarios around volume setup. This would be on Newton code using Mitaka
  configs.

  Representative failures are of the following form:
  http://logs.openstack.org/71/327971/5/gate/gate-grenade-dsvm-neutron-
  
multinode/f2690e3/logs/new/screen-n-cpu.txt.gz?level=WARNING#_2016-06-13_15_22_59_095

  The 1/3 failure rate is suspicious, and in the past has often hinted
  towards a race condition interacting between parallel API requests.

  The failure rate increase can be seen here -
  http://tinyurl.com/zrq35e8

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582807] Re: neutron-rootwrap-daemon explodes because of misbehaving "ip rule"

2018-04-03 Thread Ben Nemec
** Changed in: oslo.rootwrap
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582807

Title:
  neutron-rootwrap-daemon explodes because of misbehaving "ip rule"

Status in neutron:
  Invalid
Status in oslo.rootwrap:
  Invalid

Bug description:
  Somehow, one of my deployments (ubuntu trusty64 based)

  root@devstack:~# uname -a
  Linux devstack 3.19.0-37-generic #42~14.04.1-Ubuntu SMP Mon Nov 23 15:13:51 
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  results in never-ending output of "ip rule"

  # ip rule
  0:from all lookup local
  0:from all lookup local
  0:from all lookup local
  0:from all lookup local
  [...]
  runs forever... (apparently because of a kernel bug)

  As a result of that, when neutron-l3-agent calls "ip rule" via the
  rootwrap daemon, the daemon eventually consumes all of the VM's memory
  and explodes.

  I believe we should include some sort of limit on the rootwrap-daemon
  command response collection to avoid crashing the system under
  conditions like this.
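
  A minimal sketch of the kind of cap suggested above: stop collecting a
  child's stdout once a size limit is exceeded. The limit value and names
  are illustrative, not oslo.rootwrap code.

  import subprocess

  MAX_OUTPUT_BYTES = 8 * 1024 * 1024  # illustrative limit

  def run_with_output_cap(cmd):
      proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
      chunks, total = [], 0
      for chunk in iter(lambda: proc.stdout.read(65536), b''):
          total += len(chunk)
          if total > MAX_OUTPUT_BYTES:
              proc.kill()
              proc.wait()
              raise RuntimeError("command output exceeded %d bytes"
                                 % MAX_OUTPUT_BYTES)
          chunks.append(chunk)
      proc.wait()
      return b''.join(chunks)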

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760894] Re: very long rpc_loop in neutron openvswitch agent

2018-04-03 Thread Brian Haley
*** This bug is a duplicate of bug 1745468 ***
https://bugs.launchpad.net/bugs/1745468

We actually just fixed the conntrack issue in master and it's being
backported to stable/queens.  I'll link that bug and close this as a
duplicate.

** This bug has been marked a duplicate of bug 1745468
   Conntrack entry removal can take a long time on large deployments

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760894

Title:
  very long rpc_loop in neutron openvswitch agent

Status in neutron:
  New

Bug description:
  release: Pike

  ML2 with DVR, L3_HA and L2 population

  When I removed 100 VMs and then spawned 100 new VMs, a dozen of the new
  VMs ended up in the ERROR state because Nova gave up waiting for a "VIF
  plugged in" event from Neutron.

  I figured out that just before the new 100 VMs were spawned, the rpc_loop
  in the neutron openvswitch agent started a new iteration to remove a
  dozen ports which had been used by the old (just removed) VMs.

  The rpc_loop iteration took 465 seconds (almost 8 minutes), and during
  this time Nova timed out after waiting 300 seconds for the "VIF plugged
  in" event while spawning the new VMs.

  It looks like most of the time was spent running hundreds of conntrack
  entry deletion commands.

  You will find the rpc_loop DEBUG log in the attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1759306] Re: Scheduler host manager does not filter out cell0 from the list of cell candidates before scheduling

2018-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/556821
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=378bef70db4ca7437d716ac3e1b11a2a789f919f
Submitter: Zuul
Branch:master

commit 378bef70db4ca7437d716ac3e1b11a2a789f919f
Author: Surya Seetharaman 
Date:   Tue Mar 27 13:00:21 2018 +0200

Scheduling Optimization: Remove cell0 from the list of candidates

This patch removes cell0 from the list of loaded cells in the
cache before scheduling. Doing this improves performance slightly.

Closes-Bug: #1759306

Change-Id: I660e431305de491ff19d4a99ca801d19f4ed754b


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759306

Title:
  Scheduler host manager does not filter out cell0 from the list of cell
  candidates before scheduling

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently the nova scheduler does not filter out cell0 from the list
  of cells that it considers for scheduling. Since cell0 does not have
  any computes and is only a graveyard cell, it should be filtered out
  from the global cell cache that is maintained by the host_manager
  
(https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L639)
  and should not be a candidate cell for scheduling. It will be filtered
  out anyway every time during scheduling because it has no computes, but
  doing this earlier optimizes the scheduling.
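
  A minimal sketch of the optimization being asked for: drop cell0 from the
  candidate cells before scheduling. The all-zero UUID is cell0's
  conventional identifier; the function and attribute names are
  illustrative, not nova's actual code.

  CELL0_UUID = '00000000-0000-0000-0000-000000000000'

  def schedulable_cells(cell_mappings):
      # cell0 is only a graveyard for instances that failed scheduling,
      # so it never holds compute hosts and can be skipped up front.
      return [cell for cell in cell_mappings
              if str(cell.uuid) != CELL0_UUID]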

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1759306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760902] [NEW] Standard attributes are missing in segment response

2018-04-03 Thread Hongbin Lu
Public bug reported:

The response of the segment resource doesn't contain the standard
attributes, i.e. created_at, updated_at and revision_number. These
attributes should be visible in the response, as they are for other
resources.

** Affects: neutron
 Importance: Undecided
 Assignee: Hongbin Lu (hongbin.lu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hongbin Lu (hongbin.lu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760902

Title:
  Standard attributes are missing in segment response

Status in neutron:
  New

Bug description:
  The response of the segment resource doesn't contain the standard
  attributes, i.e. created_at, updated_at and revision_number. These
  attributes should be visible in the response, as they are for other
  resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760894] [NEW] very long rpc_loop in neutron openvswitch agent

2018-04-03 Thread Piotr Misiak
Public bug reported:

release: Pike

ML2 with DVR, L3_HA and L2 population

When I removed 100 VMs and then spawned 100 new VMs, a dozen of the new
VMs ended up in the ERROR state because Nova gave up waiting for a "VIF
plugged in" event from Neutron.

I figured out that just before the new 100 VMs were spawned, the rpc_loop
in the neutron openvswitch agent started a new iteration to remove a dozen
ports which had been used by the old (just removed) VMs.

The rpc_loop iteration took 465 seconds (almost 8 minutes), and during
this time Nova timed out after waiting 300 seconds for the "VIF plugged
in" event while spawning the new VMs.

It looks like most of the time was spent running hundreds of conntrack
entry deletion commands.

You will find the rpc_loop DEBUG log in the attachment.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760894

Title:
  very long rpc_loop in neutron openvswitch agent

Status in neutron:
  New

Bug description:
  release: Pike

  ML2 with DVR, L3_HA and L2 population

  When I removed 100 VMs and then spawned 100 new VMs, a dozen of the new
  VMs ended up in the ERROR state because Nova gave up waiting for a "VIF
  plugged in" event from Neutron.

  I figured out that just before the new 100 VMs were spawned, the rpc_loop
  in the neutron openvswitch agent started a new iteration to remove a
  dozen ports which had been used by the old (just removed) VMs.

  The rpc_loop iteration took 465 seconds (almost 8 minutes), and during
  this time Nova timed out after waiting 300 seconds for the "VIF plugged
  in" event while spawning the new VMs.

  It looks like most of the time was spent running hundreds of conntrack
  entry deletion commands.

  You will find the rpc_loop DEBUG log in the attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1759510] Re: Image import fails with python 3.5, image stuck in importing state

2018-04-03 Thread Erno Kuvaja
** Also affects: glance/queens
   Importance: Undecided
   Status: New

** Changed in: glance/queens
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1759510

Title:
  Image import fails with python 3.5, image stuck in importing state

Status in Glance:
  Fix Released
Status in Glance queens series:
  New

Bug description:
  The new image import API with glance-direct or web-download fails to
  import an image if the cloud is using Python 3.5. The image is stuck in
  the importing state forever.

  Steps:
  1. Ensure you are running glance on python 3.5
     Add below lines in your devstack/local.conf
     WSGI_MODE=mod_wsgi
     USE_PYTHON3=True
     PYTHON3_VERSION=3.5

  2. Source devstack/openrc using "$ source devstack/openrc admin admin"

  3. Create image using new import api
     $ glance image-create-via-import --container-format ami --disk-format ami 
--name cirros_image --file 

  g-api logs:

  Mar 28 08:36:01 ubuntu-3 glance-api[23449]:  |__Flow 'api_image_import': 
TypeError: Unicode-objects must be encoded before hashing
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor Traceback (most recent call last):
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor result = task.execute(**arguments)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/opt/stack/glance/glance/async/flows/api_image_import.py", line 218, in execute
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor image_import.set_image_data(image, file_path 
or self.uri, self.task_id)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/opt/stack/glance/glance/common/scripts/image_import/main.py", line 154, in 
set_image_data
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor "task_id": task_id})
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor self.force_reraise()
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor six.reraise(self.type_, self.value, self.tb)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor raise value
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/opt/stack/glance/glance/common/scripts/image_import/main.py", line 146, in 
set_image_data
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor image.set_data(data_iter)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/opt/stack/glance/glance/domain/proxy.py", line 195, in set_data
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor self.base.set_data(data, size)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File "/opt/stack/glance/glance/notifier.py", 
line 480, in set_data
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor _send_notification(notify_error, 
'image.upload', msg)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor self.force_reraise()
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor six.reraise(self.type_, self.value, self.tb)
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File 
"/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor raise value
  Mar 28 08:36:01 ubuntu-3 glance-api[23449]: ERROR 
glance.async.taskflow_executor   File "/opt/stack/glance/glance/notifier.py", 
l
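
  The TypeError above is the standard Python 3 behaviour of hashlib when it
  is given a str instead of bytes; a minimal reproduction (plain hashlib,
  independent of glance):

  import hashlib

  data = u'image data'
  try:
      hashlib.md5(data)            # Python 3: str must be encoded first
  except TypeError as exc:
      print(exc)                   # Unicode-objects must be encoded before hashing
  hashlib.md5(data.encode('utf-8'))  # works: hash the encoded bytes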

[Yahoo-eng-team] [Bug 1753964] Re: Image remains in queued state for web-download when node_staging_uri end with "/"

2018-04-03 Thread Erno Kuvaja
** Summary changed:

- Image remains in queued state for web-download when node_staging_uri uses 
default value
+ Image remains in queued state for web-download when node_staging_uri end with 
"/"

** Also affects: glance/queens
   Importance: Undecided
   Status: New

** Changed in: glance/queens
   Importance: Undecided => High

** Changed in: glance/queens
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1753964

Title:
  Image remains in queued state for web-download when node_staging_uri
  end with "/"

Status in Glance:
  In Progress
Status in Glance queens series:
  Triaged

Bug description:
  The node_staging_uri has a default of file:///tmp/staging/
  see: 
https://github.com/openstack/glance/blob/a0aaa614712090e7cab19dc35a155c32ea8f2190/glance/common/config.py#L680

  If the operator does not set 'node_staging_uri' in glance-api.conf, then
  an image imported using web-download remains in the queued state.

  Steps to reproduce:
  1. Ensure glance-api is running under mod_wsgi (add WSGI_MODE=mod_wsgi in 
local.conf and run stack.sh)
  2. Do not set node_staging_uri in glance-api.conf

  3. Create image using below curl command:
  curl -i -X POST -H "x-auth-token: " 
http://192.168.0.13:9292/v2/images -d 
'{"container_format":"bare","disk_format":"raw","name":"Import web-download"}'

  4. Import image using below curl command:
  curl -i -X POST -H "Content-type: application/json" -H "x-auth-token: 
" 
http://192.168.0.13:9292/v2/images//import -d 
'{"method":{"name":"web-download","uri":"https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip"}}'

  Expected result:
  Image should be in active state.

  Actual result:
  Image remains in queued state.

  API Logs:
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.backend [-] 
Attempting to import store file {{(pid=3506) _load_store 
/usr/local/lib/python2.7/dist-packages/glance_store/backend.py:231}}
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.capabilities 
[-] Store glance_store._drivers.filesystem.Store doesn't support updating 
dynamic storage capabilities. Please overwrite 'update_capabilities' method of 
the store to implement updating logics if needed. {{(pid=3506) 
update_capabilities 
/usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py:97}}
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: Traceback (most recent call last):
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in 
_spawn_n_impl
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: func(*args, **kwargs)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/notifier.py", line 581, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: super(TaskProxy, 
self).run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/__init__.py", line 438, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: 
executor.begin_processing(self.task_id)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 144, in 
begin_processing
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: super(TaskExecutor, 
self).begin_processing(task_id)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/__init__.py", line 63, in begin_processing
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self._run(task_id, task.type)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 165, in _run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: flow = self._get_flow(task)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 134, in _get_flow
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: invoke_kwds=kwds).driver
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/stevedore/driver.py", line 61, in 
__init__
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: 
warn_on_missing_entrypoint=warn_on_missing_entrypoint
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/stevedore/named.py", line 81, in 
__init__
  Mar 07 09:26:07 ubun
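
  A minimal sketch of a trailing-slash-safe join for the staging location,
  which is the behaviour the retitled bug points at; the function name is
  illustrative, not glance's actual code:

  def staging_path(node_staging_uri, image_id):
      # Normalise so a configured value with or without a trailing "/"
      # yields the same location.
      return node_staging_uri.rstrip('/') + '/' + image_id

  staging_path('file:///tmp/staging/', 'abc')   # 'file:///tmp/staging/abc'
  staging_path('file:///tmp/staging', 'abc')    # 'file:///tmp/staging/abc'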

[Yahoo-eng-team] [Bug 1753661] Re: missing description field for instance update/rebuild

2018-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308822
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2ad84cb34ff6a6b72c8a65fd91f2a731a149ab3f
Submitter: Zuul
Branch:master

commit 2ad84cb34ff6a6b72c8a65fd91f2a731a149ab3f
Author: liyingjun 
Date:   Wed Mar 7 09:13:48 2018 +0800

Support description for instance update/rebuild

In Nova Compute API microversion 2.19, you can specify a description
attribute when creating, rebuilding, or updating a server instance. This
description can be retrieved by getting server details, or list details
for servers, this patch adds support for this attribute for instance in
horizon.
This patch adds description for instance update/rebuild

Change-Id: I1c561607551fe6ed521772688b643cb27400e24e
Closes-bug: #1753661


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1753661

Title:
  missing description field for instance update/rebuild

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  A description is supported when creating an instance; this bug requests
  support for updating the description of an existing instance.
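
  For reference, a hedged sketch of the Compute API call the commit message
  above refers to (request shape per microversion 2.19; the token,
  endpoint and description value are placeholders):

  import json

  headers = {
      "X-OpenStack-Nova-API-Version": "2.19",  # microversion adding description
      "X-Auth-Token": "<token>",
  }
  body = json.dumps({"server": {"description": "updated from Horizon"}})
  # PUT <compute-endpoint>/servers/<server_id> with the headers and body above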

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1753661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747500] Re: SCSS entries specific to fwaas was not migrated from horizon

2018-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/551726
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4034f92acce3775b332913007ec8a9fa8311c28f
Submitter: Zuul
Branch:master

commit 4034f92acce3775b332913007ec8a9fa8311c28f
Author: Akihiro Motoki 
Date:   Sun Mar 11 09:45:20 2018 +0900

Drop FWaaS related SCSS entries

They were forgotten when splitting out from horizon.
They already have been moved to neutron-fwaas-dashboard
in https://review.openstack.org/#/c/541055/,
so we can safely drop them from horizon.

Change-Id: I8e74ed93085733f781c99423b6b1c7b15f5ea244
Closes-Bug: #1747500


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1747500

Title:
  SCSS entries specific to fwaas was not migrated from horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Neutron FWaaS dashboard:
  Fix Released

Bug description:
  Some FWaaS-specific SCSS entries remain in horizon. They need to be
  migrated to neutron-fwaas-dashboard.

  
https://github.com/openstack/horizon/blob/bf8c8242dab85020dfed8d639022f9ced00403e6/openstack_dashboard/static/dashboard/scss/components/_resource_topology.scss#L208-L213

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1747500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760843] [NEW] Idp's domain is not unique

2018-04-03 Thread wangxiyuan
Public bug reported:

Going back to the patch https://review.openstack.org/#/c/399684/ , where
domain_id was added for the IdP, I assume that domain_id is meant to be
unique?

The domain_id in Idp model is unique:
https://github.com/openstack/keystone/blob/master/keystone/federation/backends/sql.py#L58

But the db migration is not:
https://github.com/openstack/keystone/blob/master/keystone/common/sql/expand_repo/versions/012_expand_add_domain_id_to_idp.py#L55-L56

They are not consistent. Since I don't know the original purpose of the
change, I'm not very sure which is correct, but I prefer the first one.

What's more, our unit tests use the resource models to initialize the db,
so the column is unique in our unit test SQLite db. This is also not
consistent with real usage.

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  Backing to the patch https://review.openstack.org/#/c/399684/ when the
- domain_id is added  for Idp, I assume that the domain_id is designed
- unique.
+ domain_id is added  for Idp, I assume that the domain_id is designed for
+ unique?
  
  The domain_id in Idp model is unique:
  
https://github.com/openstack/keystone/blob/master/keystone/federation/backends/sql.py#L58
  
  But the db migration is not:
  
https://github.com/openstack/keystone/blob/master/keystone/common/sql/expand_repo/versions/012_expand_add_domain_id_to_idp.py#L55-L56
  
  They are not in consistence. Since I don't know the origin purpose for
  the change, I'm not very sure which is correct but I prefer the first
  one.
  
  What's more, our unit test use resource models to init db so that it's
  unique in our unit test sqlite db. It's not in consistence as well with
  the real usage.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1760843

Title:
  Idp's domain is not unique

Status in OpenStack Identity (keystone):
  New

Bug description:
  Going back to the patch https://review.openstack.org/#/c/399684/ , where
  domain_id was added for the IdP, I assume that domain_id is meant to be
  unique?

  The domain_id in Idp model is unique:
  
https://github.com/openstack/keystone/blob/master/keystone/federation/backends/sql.py#L58

  But the db migration is not:
  
https://github.com/openstack/keystone/blob/master/keystone/common/sql/expand_repo/versions/012_expand_add_domain_id_to_idp.py#L55-L56

  They are not consistent. Since I don't know the original purpose of the
  change, I'm not very sure which is correct, but I prefer the first one.

  What's more, our unit tests use the resource models to initialize the
  db, so the column is unique in our unit test SQLite db. This is also not
  consistent with real usage.
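
  A schematic illustration of the mismatch being described (generic
  SQLAlchemy, not keystone's actual model or migration code):

  import sqlalchemy as sa

  # Model side: the column carries a unique constraint.
  domain_id_model_column = sa.Column('domain_id', sa.String(64),
                                     nullable=True, unique=True)

  # Migration side: the same column added without unique=True, so an
  # existing database never gets the constraint the model declares.
  domain_id_migration_column = sa.Column('domain_id', sa.String(64),
                                         nullable=True)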

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1760843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760832] [NEW] Shows external networks even if they are not shared

2018-04-03 Thread Dmitriy R.
Public bug reported:

Hi,

I think that this case might be somehow related to the one described
here: https://bugs.launchpad.net/horizon/+bug/1384975

Steps to reproduce are the same:
Log in as tenant "A" and create a couple of networks:
- Network 1, external network shared.
- Network 2, external network (not-shared).

Now log in as tenant "B" and go to the project network listing page:
- We see the shared external networks as well as the not-shared ones.
- The not-shared networks are displayed without subnets, and they are not
available for instance/port creation.

I was able to hide the not-shared external networks (while shared ones are
still shown) by setting include_external=False here:
https://github.com/openstack/horizon/blob/9ca9b5cd81db29bddf6dbcc5fc535009a9ec63b0/openstack_dashboard/dashboards/project/networks/views.py#L55

So this setting produces the expected behaviour. There is probably a
reason why this was hardcoded, and the current behaviour may be required
for some features, but it's probably worth considering moving this to the
settings section?

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: pike

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1760832

Title:
  Shows external networks even if they are not shared

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi,

  I think that this case might be somehow related to the one described
  here: https://bugs.launchpad.net/horizon/+bug/1384975

  Steps to reproduce are the same:
  Log in as tenant "A" and create a couple of networks:
  - Network 1, external network shared.
  - Network 2, external network (not-shared).

  Now log in as tenant "B" and go to the project network listing page:
  - We see the shared external networks as well as the not-shared ones.
  - The not-shared networks are displayed without subnets, and they are
  not available for instance/port creation.

  I was able to hide the not-shared external networks (while shared ones
  are still shown) by setting include_external=False here:
  
https://github.com/openstack/horizon/blob/9ca9b5cd81db29bddf6dbcc5fc535009a9ec63b0/openstack_dashboard/dashboards/project/networks/views.py#L55

  So this setting produces the expected behaviour. There is probably a
  reason why this was hardcoded, and the current behaviour may be required
  for some features, but it's probably worth considering moving this to
  the settings section?
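
  A rough sketch of how the hardcoded flag could be driven by a setting
  instead, assuming network_list_for_tenant accepts include_external as the
  linked line suggests; the setting name here is hypothetical:

  from django.conf import settings

  from openstack_dashboard import api

  def tenant_networks(request, tenant_id):
      # Hypothetical setting name; defaults to today's hardcoded behaviour.
      include_external = getattr(
          settings, 'PROJECT_NETWORKS_INCLUDE_EXTERNAL', True)
      return api.neutron.network_list_for_tenant(
          request, tenant_id, include_external=include_external)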

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1760832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760809] [NEW] delete domain raise 500 error if the domain contains idp

2018-04-03 Thread wangxiyuan
Public bug reported:

How to reproduce:
1. create domain
   openstack domain create domainA
2. create an idp in domainA
   openstack identity provider create idpA --domain domainA
3. disable the domainA and then try to delete it.
   openstack domain set domainA --disable
   openstack domain delete domainA

Expected result:
domainA and its related resources are deleted.

Actual result:
Keystone raises a 500 internal error.

reproduce example:
http://paste.openstack.org/show/718241/

** Affects: keystone
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1760809

Title:
  delete domain raise 500 error if the domain contains idp

Status in OpenStack Identity (keystone):
  New

Bug description:
  How to reproduce:
  1. create domain
 openstack domain create domainA
  2. create an idp in domainA
 openstack identity provider create idpA --domain domainA
  3. disable the domainA and then try to delete it.
 openstack domain set domainA --disable
 openstack domain delete domainA

  Expected result:
  domainA and its related resources are deleted.

  Actual result:
  Keystone raises a 500 internal error.

  reproduce example:
  http://paste.openstack.org/show/718241/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1760809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp