[Yahoo-eng-team] [Bug 1702466] Re: Subnet details page fails when a subnet uses IPv6 with prefix delegation

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/510302
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5de6b3eb14e414364bc115225d6067e3a87ad0ac
Submitter: Zuul
Branch:master

commit 5de6b3eb14e414364bc115225d6067e3a87ad0ac
Author: Akihiro Motoki 
Date:   Sat Oct 7 16:02:50 2017 +

Show subnet detail with prefix_delegation subnetpool properly

Subnet pool ID 'prefix_delegation' is a special subnet pool in Neutron
and there is no real subnet pool with ID 'prefix_delegation',
so we need to skip subnetpool_get() call
if a subnet has 'prefix_delegation' subnet pool.

This commit also adds unit tests which covers subnet pool operations
in the subnet detail view.

Change-Id: I3227a92084ee79b60d2b10262ed94a034e396306
Closes-Bug: #1702466
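
A minimal sketch of the guard described above (hypothetical helper; the
actual Horizon patch may differ in names and placement):

    # Only look up the subnet pool when the subnet references a real pool;
    # 'prefix_delegation' is a special marker in Neutron, not a retrievable
    # subnet pool resource.
    def _get_subnetpool(request, subnet):
        pool_id = subnet.subnetpool_id
        if not pool_id or pool_id == 'prefix_delegation':
            return None
        return api.neutron.subnetpool_get(request, pool_id)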


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1702466

Title:
  Subnet details page fails when a subnet uses IPv6 with prefix
  delegation

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New

Bug description:
  Package: openstack-dashboard
  Description: Ubuntu 16.04.2 LTS
  Release: 16.04
  openstack-dashboard:
    Installed: 2:9.1.2-0ubuntu1
    Candidate: 2:9.1.2-0ubuntu1

  Description: the problem occurs when using an IPv6 subnet with prefix
  delegation (IPv4 subnets are OK). In the web interface, if we try to
  see subnet details via System -> Networks -> click on a network
  name -> click on a subnet name, we get the following error:

  neutron-server[22658]: 2017-07-04 16:29:21.186 22864 INFO neutron.wsgi
  [req-0fbaad92-83ff-4179-8056-8521fbef5ff9
  e85813733498d6d6b6678b77d4756aaddef55bba18166be7d3bc3a8d20a65fd8
  769822d4bf984cfeb3ab910cec9fa5b3 - - -] 192.168.0.1 - - [04/Jul/2017
  16:29:21] "GET /v2.0/subnetpools/prefix_delegation.json HTTP/1.1" 404
  266 0.007587

  The subnet is working fine. Here is the output of the openstack CLI
  'show' command:

  $ openstack subnet show nti-subnet-ipv6

  
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | 2001:DB8:1400:5026::2-2001:DB8:1400:5026:::: |
  | cidr              | 2001:DB8:1400:5026::/64                      |
  | created_at        | 2016-09-12T13:01:15                          |
  | description       |                                              |
  | dns_nameservers   | 2001:DB8:1400:2127::FFFE                     |
  | enable_dhcp       | True                                         |
  | gateway_ip        | 2001:DB8:1400:5026::1                        |
  | host_routes       |                                              |
  | id                | 129bd534-7121-4759-93a2-68ac907edf74         |
  | ip_version        | 6                                            |
  | ipv6_address_mode | dhcpv6-stateless                             |
  | ipv6_ra_mode      | dhcpv6-stateless                             |
  | name              | nti-subnet-ipv6                              |
  | network_id        | d204d9a2-bb8a-48af-bfa0-f0438970d98b         |
  | project_id        | 769822d4bf984cfeb3ab910cec9fa5b3             |
  | subnetpool_id     | prefix_delegation                            |
  | updated_at        | 2017-05-31T18:59:50                          |
  +-------------------+----------------------------------------------+

  Trying to find the problem, I found the file
  openstack_dashboard/dashboards/project/networks/subnets/views.py with
  this code (comments are mine):

  class DetailView(tabs.TabView):
      tab_group_class = project_tabs.SubnetDetailTabs
      template_name = 'horizon/common/_detail.html'
      page_title = "{{ subnet.name|default:subnet.id }}"

      @memoized.memoized_method
      def get_data(self):
          subnet_id = self.kwargs['subnet_id']
          try:
              subnet = api.neutron.subnet_get(self.request, subnet_id)
          except Exception:
              subnet = []
              msg = _('Unable to retrieve subnet details.')
              exceptions.handle(self.request, msg,
                                redirect=self.get_redirect_url())
          else:
              if subnet.ip_version == 6:
                  ipv6_modes = utils.get_ipv6_modes_menu_from_attrs(
                      subnet.ipv6_ra_mode,

[Yahoo-eng-team] [Bug 1648064] Re: Error "Table 'neutron.ml2_vlan_allocations' doesn't exist" in neutron server

2017-10-16 Thread Launchpad Bug Tracker
[Expired for openstack-ansible because there has been no activity for 60
days.]

** Changed in: openstack-ansible
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648064

Title:
  Error "Table 'neutron.ml2_vlan_allocations' doesn't exist" in neutron
  server

Status in neutron:
  Expired
Status in openstack-ansible:
  Expired

Bug description:
  Hello,

  This issue ("Table 'neutron.ml2_vlan_allocations' doesn't exist")
  appears in the neutron logs after a deploy in openstack-ansible. Our
  deploy is working fine, and I see no functional test issues, so this
  seems a low-priority issue. This message appears on both Trusty and
  Xenial.

  I guess this is a migration issue, and I don't know what we are doing wrong.
  Anyway, this is the log [1].

  You can see the neutron release in the same file, BUT this bug appeared in
  earlier releases; we just never paid attention because everything
  functionally works. Sadly, I cannot trace back further than what our gates
  store. I can tell you it's earlier than 2016-10-21 [2] (which is really
  old; the SHA of the checked-out version is
  287bb35e167143388ab3d069af209341a75430f3). That also means the bug probably
  appeared in the Newton cycle.

  Any recent commit in the neutron role will have these issues listed, so we
  can reproduce it quite easily with the latest SHAs too. All the neutron
  logs are available in our gates if you want.

  Best regards,

  JP.

  ===

  [1]: http://logs.openstack.org/24/391524/48/check/gate-openstack-
  ansible-os_neutron-ansible-func-ubuntu-
  xenial/9c75e15/logs/openstack/openstack1/neutron/neutron-
  server.log.txt.gz#_2016-12-04_15_45_58_790

  [2]: http://logs.openstack.org/05/389705/1/check/gate-openstack-
  ansible-os_neutron-ansible-func-ubuntu-
  xenial/b83b5e3/logs/openstack/openstack1/neutron/neutron-
  server.log.txt.gz#_2016-10-24_17_44_10_157

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648064] Re: Error "Table 'neutron.ml2_vlan_allocations' doesn't exist" in neutron server

2017-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648064

Title:
  Error "Table 'neutron.ml2_vlan_allocations' doesn't exist" in neutron
  server

Status in neutron:
  Expired
Status in openstack-ansible:
  Expired

Bug description:
  Hello,

  This issue ("Table 'neutron.ml2_vlan_allocations' doesn't exist")
  appears in the neutron logs after a deploy in openstack-ansible. Our
  deploy is working fine, and I see no functional test issues, so this
  seems a low-priority issue. This message appears on both Trusty and
  Xenial.

  I guess this is a migration issue, and I don't know what we are doing wrong.
  Anyway, this is the log [1].

  You can see the neutron release in the same file, BUT this bug appeared in
  earlier releases; we just never paid attention because everything
  functionally works. Sadly, I cannot trace back further than what our gates
  store. I can tell you it's earlier than 2016-10-21 [2] (which is really
  old; the SHA of the checked-out version is
  287bb35e167143388ab3d069af209341a75430f3). That also means the bug probably
  appeared in the Newton cycle.

  Any recent commit in the neutron role will have these issues listed, so we
  can reproduce it quite easily with the latest SHAs too. All the neutron
  logs are available in our gates if you want.

  Best regards,

  JP.

  ===

  [1]: http://logs.openstack.org/24/391524/48/check/gate-openstack-
  ansible-os_neutron-ansible-func-ubuntu-
  xenial/9c75e15/logs/openstack/openstack1/neutron/neutron-
  server.log.txt.gz#_2016-12-04_15_45_58_790

  [2]: http://logs.openstack.org/05/389705/1/check/gate-openstack-
  ansible-os_neutron-ansible-func-ubuntu-
  xenial/b83b5e3/logs/openstack/openstack1/neutron/neutron-
  server.log.txt.gz#_2016-10-24_17_44_10_157

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711428] Re: cloud-init sometimes fails on dpkg lock due to concurrent apt-daily-upgrade.service execution

2017-10-16 Thread Launchpad Bug Tracker
[Expired for apt (Ubuntu) because there has been no activity for 60
days.]

** Changed in: apt (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1711428

Title:
  cloud-init sometimes fails on dpkg lock due to concurrent apt-daily-
  upgrade.service execution

Status in cloud-init:
  Incomplete
Status in apt package in Ubuntu:
  Expired

Bug description:
  This is the same problem as https://bugs.launchpad.net/cloud-
  init/+bug/1693361, but with a different APT invoking service.  In this
  case it is apt-daily-upgrade.service.

  So, I guess add apt-daily-upgrade.service to the Before= line in
  /lib/systemd/system/cloud-final.service alongside apt-daily.service.

  Or wait for an APT fix.  Or retry APT commands when executing
  "packages:".

  Reporting this to save someone else trouble, but I think we'll be
  rolling back to Trusty at this point.  Hopefully the B LTS will have
  an alternative to systemd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1711428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723982] Re: Commands that do not exist appear in help message of nova-manage

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/512324
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=86a535a2689537d81e8a98a9d28ddf1d233dcddc
Submitter: Zuul
Branch:master

commit 86a535a2689537d81e8a98a9d28ddf1d233dcddc
Author: Takashi NATSUME 
Date:   Mon Oct 16 22:45:09 2017 +0900

Fix nova-manage commands that do not exist

The following commands do not exist,
but they appear in the help message.
So fix them.

* nova-manage db print_dict
* nova-manage cell parse_server_string

Change-Id: I7b16b8ab36b9a9ae719bf98a75511d82041d0152
Closes-Bug: #1723982


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723982

Title:
  Commands that do not exist appear in help message of nova-manage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  The following commands do not exist.
  But they appear in the help message of nova-manage.

  * nova-manage cell parse_server_string
  * nova-manage db print_dict

  Steps to reproduce
  ==

  stack@devstack-master:~$ nova-manage cell
  usage: nova-manage cell [-h] {create,delete,list,parse_server_string} ...
  nova-manage cell: error: too few arguments
  stack@devstack-master:~$ nova-manage cell parse_server_string
  An error has occurred:
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
  fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
  missing = validate_args(fn, *fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
  if six.get_method_self(fn) is not None:
  AttributeError: 'function' object has no attribute 'im_self'

  stack@devstack-master:~$ nova-manage db
  usage: nova-manage db [-h]

{archive_deleted_rows,ironic_flavor_migration,null_instance_uuid_scan,online_data_migrations,print_dict,sync,version}
...
  nova-manage db: error: too few arguments
  stack@devstack-master:~$ nova-manage db print_dict
  An error has occurred:
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
  fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
  missing = validate_args(fn, *fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
  if six.get_method_self(fn) is not None:
  AttributeError: 'function' object has no attribute 'im_self'
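
  For illustration only (not nova code): under Python 2,
  six.get_method_self() simply reads im_self, so handing it a plain
  function reproduces the error above.

    import six

    def plain_function():
        pass

    try:
        six.get_method_self(plain_function)  # plain function, not a bound method
    except AttributeError as exc:
        print(exc)  # Python 2: 'function' object has no attribute 'im_self'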

  Environment
  ===

  nova master(commit 1148a2d67b70dd767f798e1dfe0b4c7634d2f90c)
  OS: Ubuntu 16.04.2 LTS

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723120] Re: Copy-paste style error in documentation "Compute schedulers"

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/511839
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=aff78ac53e55ef5dca118e79b36052ee54ea4f38
Submitter: Zuul
Branch:master

commit aff78ac53e55ef5dca118e79b36052ee54ea4f38
Author: evikbas 
Date:   Fri Oct 13 15:09:13 2017 +0200

doc: Fix command output in scheduler document

Change-Id: If2fe3dcac8d32d8a3f83f4db8e2a0b41ac15e888
Closes-Bug: #1723120


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723120

Title:
  Copy-paste style error in documentation "Compute schedulers"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output.

  Description
  ===
  There is a (copy-paste style) error in this document chapter:
  
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#example-specify-compute-hosts-with-ssds
  The command output below does not correctly reflect the changes made by the
  previous command: 'node1' was previously added to the aggregate's list of
  hosts, but it is not displayed in the printout.

  Expected result
  ===

  $ openstack aggregate add host 1 node2
  +-------------------+--------------------------------------------------+
  | Field             | Value                                            |
  +-------------------+--------------------------------------------------+
  | availability_zone | nova                                             |
  | created_at        | 2016-12-22T07:31:13.00                           |
  | deleted           | False                                            |
  | deleted_at        | None                                             |
  | hosts             | [u'node1', u'node2']                             |
  | id                | 1                                                |
  | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
  | name              | fast-io                                          |
  | updated_at        | None                                             |
  +-------------------+--------------------------------------------------+

  Actual result
  =

  $ openstack aggregate add host 1 node2
  +-------------------+--------------------------------------------------+
  | Field             | Value                                            |
  +-------------------+--------------------------------------------------+
  | availability_zone | nova                                             |
  | created_at        | 2016-12-22T07:31:13.00                           |
  | deleted           | False                                            |
  | deleted_at        | None                                             |
  | hosts             | [u'node2']                                       |
  | id                | 1                                                |
  | metadata          | {u'ssd': u'true', u'availability_zone': u'nova'} |
  | name              | fast-io                                          |
  | updated_at        | None                                             |
  +-------------------+--------------------------------------------------+

  ---
  Release: 16.0.0.0rc2.dev691 on 2017-10-11 02:43
  SHA: 219c2660cdc936c9d1469d7629645e05a511fbf0
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/configuration/schedulers.rst
  URL: 
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713574] Re: python 3 errors with memcache enabled

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/510241
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=74455d80575aa174db0217c5eae905eacab42d78
Submitter: Zuul
Branch:master

commit 74455d80575aa174db0217c5eae905eacab42d78
Author: Tin Lam 
Date:   Thu Oct 5 21:47:30 2017 -0500

Fix py3 byte/string error

This patch set corrects a problem when the keystonemiddleware is
executed with memcache encryption enabled.  Currently, the
hmac.new() calls throw exceptions in python3 due to how py2 and py3
handles string vs. byte/bytearray.

Co-Authored-By: Rohan Arora 

Closes-Bug: #1713574
Change-Id: I9bb291be48a094b9f266a8459a3f51ee163d33a3
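
A self-contained illustration of the py2/py3 difference described above
(the key and token values are made up; this is not the keystonemiddleware
code itself):

    import hashlib
    import hmac

    secret = 'lalalalalalaalala'   # str, as read from the config file
    token = 'gAAAAA-example-token'

    # Python 3: hmac.new() rejects str keys/messages with a TypeError,
    # so both values must be encoded to bytes first.
    digest = hmac.new(secret.encode('utf-8'),
                      token.encode('utf-8'),
                      hashlib.sha256).hexdigest()
    print(digest)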


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1713574

Title:
  python 3 errors with memcache enabled

Status in OpenStack Identity (keystone):
  Invalid
Status in keystoneauth:
  New
Status in keystonemiddleware:
  Fix Released

Bug description:
  Hi, we are using gnocchi 4 running the following:

  keystoneauth1 (3.1.0)
  keystonemiddleware (4.14.0)
  python-keystoneclient (3.13.0)

  with python 3.5.4

  on a configuration file like this :

  [keystone_authtoken]
  signing_dir = /var/cache/gnocchi
  project_domain_name = default
  user_domain_name = default
  signing_dir = /var/cache/gnocchi
  auth_uri = http://yourmomkeystone.com:5000/v3
  auth_url = http://yourmomkeystone.com:35357/v3
  project_name = admin
  password = porotito
  username = cloudadmin
  auth_type = password
  auth_type = password
  memcached_servers = yourmommecached:11211
  insecure=true
  endpoint_type = internal
  region_name = yourmomregion
  memcache_security_strategy = ENCRYPT
  memcache_secret_key = lalalalalalaalala

  After the API starts, the token is obtained successfully, but we get
  this stack trace when trying to use memcached.

  2017-08-28 20:12:41,029 [7] CRITICAL root: Traceback (most recent call last):
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in __call__
      resp = self.call_func(req, *args, **self.kwargs)
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in call_func
      return self.func(req, *args, **kwargs)
    File "/usr/local/lib/python3.5/site-packages/oslo_middleware/base.py", line 131, in __call__
      response = req.get_response(self.application)
    File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1316, in send
      application, catch_exc_info=False)
    File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1280, in call_application
      app_iter = application(self.environ, start_response)
    File "/usr/local/lib/python3.5/site-packages/paste/urlmap.py", line 216, in __call__
      return app(environ, start_response)
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in __call__
      resp = self.call_func(req, *args, **self.kwargs)
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in call_func
      return self.func(req, *args, **kwargs)
    File "/usr/local/lib/python3.5/site-packages/oslo_middleware/base.py", line 131, in __call__
      response = req.get_response(self.application)
    File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1316, in send
      application, catch_exc_info=False)
    File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1280, in call_application
      app_iter = application(self.environ, start_response)
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in __call__
      resp = self.call_func(req, *args, **self.kwargs)
    File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in call_func
      return self.func(req, *args, **kwargs)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 331, in __call__
      response = self.process_request(req)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 622, in process_request
      resp = super(AuthProtocol, self).process_request(request)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 404, in process_request
      allow_expired=allow_expired)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 434, in _do_fetch_token
      data = self.fetch_token(token, **kwargs)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 736, in fetch_token
      cached = self._cache_get_hashes(token_hashes)
    File "/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py", line 719, in _cache_get_hashes
      cached = 

[Yahoo-eng-team] [Bug 1527008] Re: neutron-ml2_ofa.rst lacks some options

2017-10-16 Thread Boden R
Best I can tell from comment #6 and the current neutron/docs/source
tree, this is no longer valid and documented in neutron.

If it is, please reopen and reference the OpenStack docs site URL that has the
doc needing updating.
Thanks

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527008

Title:
  neutron-ml2_ofa.rst lacks some options

Status in neutron:
  Invalid
Status in openstack-manuals:
  Won't Fix

Bug description:
  neutron-ml2_ofa.rst[1] should contain get_datapath_retry_times and 
physical_interface_mappings options.
  These are defined as follows:

  http://git.openstack.org/cgit/openstack/networking-
  ofagent/tree/networking_ofagent/plugins/ofagent/common/config.py

  A third-party repository does not seem to be referenced when the rst files
  are generated. neutron-ml2_brocade.rst seems to be the same, too.
  And openstack-manuals does not contain rst table files for other third
  parties.

  I'm not sure if openstack-manuals should have rst table files for
  these third party drivers in this point.

  
  [1] 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-ref-rst/source/tables/neutron-ml2_ofa.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718512] Re: migration fails if instance build failed on destination host

2017-10-16 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718512

Title:
  migration fails if instance build failed on destination host

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  (OpenStack Nova, commit d8b30c3772, per OSA-14.2.7)

  if an instance build fails on a hypervisor the "retry" field of the
  instance's request spec is populated with which host and how many
  times it attempted to retry the build. this field remains populated
  during the life-time of the instance.

  if a live-migration for the same instance is requested, the conductor
  loads this request spec and passes it on to the scheduler. the
  scheduler will fail the migration request on RetryFilter since the
  target was already known to have failed (albeit, for the build).

  with the help of mriedem and melwitt of #openstack-nova, we determined
  that migration retries are handled separately from build retries.
  mriedem suggested a patch to ignore the retry field of the instance
  request spec during migrations. this patch allowed the failing
  migration to succeed.

  it is important to note that it may fail the migration again, however
  there is still sufficient reason to ignore the build's
  failures/retries during a migration.

  12:55 < mriedem> it does stand to reason that if this instance failed to 
build originally on those 2 hosts, that live migrating it there might fail 
too...but we don't know why it originally failed, could have been a resource 
claim issue at the time
  12:58 < melwitt> yeah, often it's a failed claim. and also what if that 
compute host is eventually replaced over the lifetime of the cluster, making it 
a fresh candidate for several instances that might still avoid it because they 
once failed to build there back when it was a different machine
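
  A rough sketch of the workaround discussed above (attribute names assume
  the RequestSpec object; this is not the actual nova patch):

    # Before handing the RequestSpec to the scheduler for a (live) migration,
    # drop the build-time retry history so RetryFilter does not reject hosts
    # that only failed during the original build.
    def ignore_build_retries(request_spec):
        if getattr(request_spec, 'retry', None) is not None:
            request_spec.retry = None
        return request_spec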

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718954] Re: network_data.json contains default routes for 2 interfaces

2017-10-16 Thread Boden R
Best I can tell, from a neutron perspective this is now being driven under the
referenced RFE [1]. Under that assumption I'm taking this one off the bug
queue. If [1] doesn't cover this defect, please feel free to reopen and
provide some details on how this differs from [1].

Thanks


[1] https://bugs.launchpad.net/neutron/+bug/1717560

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718954

Title:
  network_data.json contains default routes for 2 interfaces

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  On an OpenStack Ocata install a guest with two network interfaces
  attached is provided with network_data.json that describes 2 default
  routes.

  See attached network_data.json and (pretty formatted 
network_data-pretty.json).
  Also note that the reporter ran 'dhclient -v' which created the attached
  dhclient.leases file.  Only one of the dhcp servers returns a 'routers'
  option.  That would seem to indicate that the platform has some distinction.

  Cloud-init renders the networking in /etc/network/interfaces and ends
  up with 2 default routes, which doesn't do what the user wanted.

  This issue was originally raised in freenode #cloud-init.
  There is discussion surrounding it with the original reporter at:
    https://irclogs.ubuntu.com/2017/09/22/%23cloud-init.html

  Related bugs:
   * bug 1717560: allow to have no default route in DHCP host routes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1720091] Re: delete running vms, but ovs flow table is still residual

2017-10-16 Thread Boden R
Moving this back to Fix Released. It's not clear why it was moved back to "New"
on 10-03. If there's still an issue, please update with additional information
as to why it's re-opened and the original fix isn't sufficient.
Thanks

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1720091

Title:
  delete running vms, but ovs flow table is still residual

Status in neutron:
  Fix Released

Bug description:
  In the Pike version, if I delete a running VM, the OVS flow table
  entries are still left behind.

  For example:

  First, create a VM named pc1:

  [root@bogon ~]# nova list
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | ID                                   | Name | Status | Task State | Power State | Networks     |
  +--------------------------------------+------+--------+------------+-------------+--------------+
  | 2f91523c-6a4f-434a-a228-0d07ca735e6a | pc1  | ACTIVE | -          | Running     | net=5.5.5.13 |
  +--------------------------------------+------+--------+------------+-------------+--------------+

  Second, directly delete the running virtual machine:

  [root@bogon ~]# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  The relevant flow table entries are then left behind:

  [root@bogon ~]# ovs-ofctl dump-flows br-int | grep 5.5.5.13
   cookie=0x8231b3d9ff6eecde, duration=189.590s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+est-rel-rpl,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(2,1/2)
   cookie=0x8231b3d9ff6eecde, duration=189.589s, table=82, n_packets=0, n_bytes=0, idle_age=189, priority=70,ct_state=+new-est,ip,reg6=0x1,nw_src=5.5.5.13 actions=conjunction(3,1/2)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1720091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724065] [NEW] Requesting an out of range microversion without an accept header in placement results in a KeyError

2017-10-16 Thread Chris Dent
Public bug reported:

In the placement service (as of microversion 1.10), if you request a
microversion that is outside the acceptable range of VERSIONS and do
_not_ include an 'Accept' header in the request, there is a 500 and a
KeyError while webob tries to look up the Accept header.

The issue is in FaultWrapper, so the problem can be dealt with elsewhere
in the placement WSGI stack.
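
An illustrative guard only (not the placement fix): read the header from
the WSGI environ with a fallback so a missing Accept header cannot raise
KeyError.

    def accept_header(environ, default='application/json'):
        # environ is the WSGI dict; HTTP_ACCEPT is absent when the client
        # sends no Accept header, so avoid environ['HTTP_ACCEPT'].
        return environ.get('HTTP_ACCEPT') or default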

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724065

Title:
  Requesting an out of range microversion without an accept header in
  placement results in a KeyError

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In the placement service (as of microversion 1.10), if you request a
  microversion that is outside the acceptable range of VERSIONS and do
  _not_ include an 'Accept' header in the request, there is a 500 and a
  KeyError while webob tries to look up the Accept header.

  The issue is in FaultWrapper, so the problem can be dealt with
  elsewhere in the placement WSGI stack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650287] Re: Remove redundant parentheses

2017-10-16 Thread Akihiro Motoki
This kind of thing can be improved when a related portion is touched.
We don't need to clean them up only for consistency; it bothers
reviewers a lot.

** Changed in: horizon
 Assignee: liao...@hotmail.com (liaozd) => (unassigned)

** Changed in: horizon
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1650287

Title:
  Remove redundant parentheses

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  some parentheses in the file
  horizon/openstack_dashboard/dashboards/project/volumes/volumes/forms.py
  are not necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1650287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1025306] Re: horizon should support -hint options when creating an instance

2017-10-16 Thread Akihiro Motoki
As of Pike, scheduler hints are now supported in the Angular Launch
Instance form, so I believe the requested feature is achieved. Marking
this as Invalid.

** Changed in: horizon
 Assignee: Zhenguo Niu (niu-zglinux) => (unassigned)

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1025306

Title:
  horizon should support -hint options when creating an instance

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  With this support, users can use scheduler filter options such as
  force_hosts and ignore_hosts when creating an instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1025306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556805] Re: The modal footer on create/edit network and subnet is different.

2017-10-16 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1386370 ***
https://bugs.launchpad.net/bugs/1386370

Per review comment, this is a duplicate.

** This bug has been marked a duplicate of bug 1386370
   Create Subnet missing cancel button & back/Next button spacing issue

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1556805

Title:
  The modal footer on create/edit network and subnet is different.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In create/edit network and subnet modal, footer layout is different.
  In Network, this modal has a "Cancel" button but subnet modal does not have.
  In addition, the position of "Prev" is also different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1556805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476657] Re: Attempt to upload image via ftp makes queued image

2017-10-16 Thread Akihiro Motoki
horizon cannot know which types of URL are supported, so the glance API
should handle this appropriately and return an error when a URL is
unsupported. This is not a job for horizon (even though horizon can display
some message based on an exception message or a response code). Marking this
as Invalid. Feel free to file a new bug if the situation has changed since
the bug was reported and there is anything to implement on the horizon side.

** Changed in: horizon
 Assignee: Vlad Okhrimenko (vokhrimenko) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476657

Title:
  Attempt to upload image via ftp makes queued image

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Attempt to upload image via ftp creates queued image.

  Steps:
  1. Login to Horizon.
  2. Go to Images tab.
  3 .Click Create Image.
  4. Specify Name and type FTP location instead of HTTP.

  Actual:
  Image with permanent "queued" status is created.

  Expected to get some warning about unsupported protocol.
  For example glance CLI does not allow to upload via FTP and warns user by 
words:
  "400 Bad Request
  ...
  External source are not supported"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560790] Re: Horizon UI shows duplicated Hypervisors information (hostname) in the "Hypervisors" section

2017-10-16 Thread Akihiro Motoki
Based on the captured image file, this is an issue in nova. If you really
want to remove the duplicates, it should be handled in the nova API
layer. There is nothing to do in horizon.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560790

Title:
  Horizon UI shows duplicated Hypervisors information (hostname) in the
  "Hypervisors" section

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  
  When connecting 2 different datastores to the same cluster from VMware, the
  hypervisor information is duplicated. Note that even though their IDs are
  different (which you can see when running the nova hypervisor-list command),
  it is still difficult for users to tell them apart, which makes the Horizon
  UI less friendly. We should make it more useful and meaningful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1560790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533046] Re: dashborad appear duplication of the same data

2017-10-16 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1528465 ***
https://bugs.launchpad.net/bugs/1528465

** This bug has been marked a duplicate of bug 1528465
   dashboard project network column display duplicate default public network 
randomly (with admin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533046

Title:
  dashborad appear duplication of the same data

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  On my dashboard -> Project -> Network -> Networks,
  there appears a duplication of my public network, which is the same one.


  # neutron net-list
  +--------------------------------------+---------+----------------------------------------------------------+
  | id                                   | name    | subnets                                                  |
  +--------------------------------------+---------+----------------------------------------------------------+
  | d5eb23fb-069b-4642-ae82-11657d9f0b73 | private | 179822df-5490-4090-bb89-93f981849639 fdc4:2769:5ba1::/64 |
  |                                      |         | 60bc8942-2979-4870-8a9f-49aac7e44a29 10.0.0.0/24         |
  | 9e0c1b5d-439f-4540-8d1e-e5eb08a7a02d | public  | 4eead244-8e38-4d34-8e5e-f4a4314fd08e 2001:db8::/64       |
  |                                      |         | 7b692067-e7fc-4721-8707-476275b30d1b 172.24.4.0/24       |
  +--------------------------------------+---------+----------------------------------------------------------+

  
  but I can see three networks on my dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361529] Re: Configurable instance detail tabs

2017-10-16 Thread Akihiro Motoki
The console tab can be disabled by setting CONSOLE_TYPE to None in 
local_settings.py.
https://docs.openstack.org/horizon/latest/configuration/settings.html#console-type
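
For example, in openstack_dashboard/local/local_settings.py:

    # Hides the console tab entirely (see the settings reference above).
    CONSOLE_TYPE = None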

** Changed in: horizon
 Assignee: daiki aminaka (1991-daiki) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361529

Title:
  Configurable instance detail tabs

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  For example, VMware environment doesn't support the console log feature.
  In that case, usability will improve if the console log tab is hidden.
  This should be configurable by local_settings.py:
  ---
  # Available values are 'OverviewTab', 'LogTab', 'ConsoleTab'
  HORIZON_CONFIG["instance_detail_tabs"] = ('ConsoleTab', 'OverviewTab')
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395434] Re: Horizon RBAC - (need to) Hide tab if no permissions available

2017-10-16 Thread Akihiro Motoki
The situation has changed a lot since the bug was reported. horizon now
provides the pluggable panel mechanism and operators can disable a
specific panel if they want. This is no longer a bug.

** Changed in: horizon
 Assignee: Timur Sufiev (tsufiev-x) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1395434

Title:
  Horizon RBAC - (need to) Hide tab if no permissions available

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Assume I (as sys admin) want to hide LBAAS services from tenant
  owners, therefore I changed every LBAAS feature (such as create pool,
  update vip, delete member etc.) in neutron_policy.json to
  rule:admin_only.

  Current result: the LBAAS tab is accessible (for tenant owners) with
  no content and no permissions for creating/updating/deleting.

  Expected result (in my opinion): hide LBAAS tab

  Comment:
  LBAAS is just an example, I think that every feature's tab without 
permissions should be hidden, unless it has important data to present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1395434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481518] Re: pass params to wizards

2017-10-16 Thread Akihiro Motoki
It looks like completed.

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1481518

Title:
  pass params to wizards

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The app catalog plugin needs to pass off parameters to several of the
  Horizon wizards. Several of them don't quite accept all the needed
  parameters though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1481518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497358] Re: Creating instance with Boot from instance (creates new volume) timesout before image can be downloaded

2017-10-16 Thread Akihiro Motoki
horizon provides a way to create a volume from an image based on a cinder
feature. We don't have a plan to add orchestration to address the
issue reported here. If a user wants to handle a larger image with a
volume, we strongly suggest creating the image first.

The block device mapping in the nova API allows creating a volume from
an existing image, and nova can use a service token to overcome user
token expiration during a long operation (from Pike?), so the problem
will be gone sooner or later (or already?).

This bug is about horizon, so marking this as Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497358

Title:
  Creating instance with Boot from instance (creates new volume)
  timesout before image can be downloaded

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When creating a new instance with the option to create a new volume
  from an image and that image is large, instance creation can time out
  before the image has had a chance to download.

  With a larger image size (like Windows, but could be done with others) this 
process takes a long time. The instance creation times out with the default 
settings and the instance is cleaned up. But in the background the download is 
still happening.

  
  So far, our Cinder driver is not involved in any way in this process. 
  

  
  Once the image download finally completes, it then calls into the driver to
  create the volume. I would think this would be done first, to make sure
  there is a volume to transfer the image to in the first place. It then
  attaches the volume to the compute host, then gets the failure rollback and
  deletes the volume.

  If you watch the syslog (tail -f /var/log/syslog) while all this is
  happening, you can see, well after horizon errors out, some messages from
  the kernel that it discovered a new iSCSI device, then shortly afterwards a
  warning that it received an indication that the LUN assignments have changed
  (from the device removal).

  Horizon displays the message:

  Error: Build of instance 84bba509-1727-4c32-83c4-925f91f12c6f aborted:
  Block Device Mapping is Invalid

  In the n-cpu.log there is a failure with the message:

  VolumeNotCreated: Volume f5818ef3-c21d-44d4-b2e6-9996d4ac7bec did not
  finish being created even after we waited 214 seconds or 61 attempts.
  And its status is creating.
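
  (Not part of the original report: the wait quoted above corresponds to
  nova's block-device allocation retry options, assuming the usual defaults
  of 60 retries at 3-second intervals; raising them in nova.conf is one
  possible mitigation, for example:)

    [DEFAULT]
    # example values only
    block_device_allocate_retries = 120
    block_device_allocate_retries_interval = 3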

  Someone from the nova team thought this could be the case if
  everything is being passed to the novaclient rather than performing
  the operations directly (with the volume likely being created first)
  like the tempest tests do for boot images. It could be argued that the
  novaclient should be updated to perform this correctly via that path,
  but this surfaces, and could also be fixed, at the horizon level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724043] [NEW] l3-agent network_update() can throw an exception if router ex_gw_port is None

2017-10-16 Thread Brian Haley
Public bug reported:

I've seen this error a couple of times in downstream testing, but looks
like it could be just as broken upstream:

2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server [req-a1152197-d8b1-4e34-ae63-8a94fd69ebcd faf66db1b6de4c56a9925b9e5aa3369d c977198dffa24e6f8e9e8c8c4cf3211c - - -] Exception during message handling: TypeError: 'NoneType' object has no attribute '__getitem__'
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 446, in network_update
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     if any(port_belongs(p) for p in ports):
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 446, in <genexpr>
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     if any(port_belongs(p) for p in ports):
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 445, in <lambda>
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     port_belongs = lambda p: p['network_id'] == network_id
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server TypeError: 'NoneType' object has no attribute '__getitem__'
2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server

Since ri.ex_gw_port can be None, that can cause an exception when
looking for ports we might have in that network.  Here's an example if
the port is None:

  ports = itertools.chain(ri.internal_ports, [ri.ex_gw_port])

>>> import itertools
>>> foo = itertools.chain([], [None])
>>> port_belongs = lambda p: p['network_id'] == network_id
>>> any(port_belongs(p) for p in foo)
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 1, in 
  File "", line 1, in 
TypeError: 'NoneType' object has no attribute '__getitem__'

We need to check if it's None and use [] instead.
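
A minimal sketch of that guard (not necessarily the merged patch):

    # Treat a missing gateway port as an empty list before chaining.
    gw_ports = [ri.ex_gw_port] if ri.ex_gw_port else []
    ports = itertools.chain(ri.internal_ports, gw_ports)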

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1724043

Title:
  l3-agent network_update() can throw an exception if router ex_gw_port
  is None

Status in neutron:
  In Progress

Bug description:
  I've seen this error a couple of times in downstream testing, but
  looks like it could be just as broken upstream:

  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server [req-a1152197-d8b1-4e34-ae63-8a94fd69ebcd faf66db1b6de4c56a9925b9e5aa3369d c977198dffa24e6f8e9e8c8c4cf3211c - - -] Exception during message handling: TypeError: 'NoneType' object has no attribute '__getitem__'
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
  2017-10-09 18:36:29.507 126916 ERROR oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1328402] Re: Term consistency in System Info dashboard

2017-10-16 Thread Akihiro Motoki
As of Queens-1, this bug no longer exists.

** Changed in: horizon
 Assignee: Ganesh (ganeshna) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1328402

Title:
  Term consistency in System Info dashboard

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Just a minor nit:
  In the System Info table, in the Compute Services tab, Status is
  enabled/disabled, whereas in Services and Block Device Services the Status
  is Enabled/Disabled. IMHO, we should change the Status in the Compute
  Services tab to:

  enabled/disabled --> Enabled/Disabled, to maintain consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1328402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356738] Re: Multiple regions in keystone create unexpected results in dashboard

2017-10-16 Thread Akihiro Motoki
There is nothing to do in terms of upstream development. It is now a question
of how operators or distribution packagers manage the django_openstack_auth
version. Marking this as Invalid (as the upstream fix was done as a
separate bug, as mentioned in #3 and #4).

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356738

Title:
  Multiple regions in keystone create unexpected results in dashboard

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When there are multiple regions in the dashboard you can have all kinds of
  issues:
  * The region you will get will be the one with the lowest ID in the
    database, which is very inconvenient
  * Feature disparity will break the dashboard (e.g. one region having
    security groups while another does not)
  * When an API endpoint in any region is broken the dashboard can break
    (should be fixed with https://review.openstack.org/#/c/106101/ )

  I've made a small feature for the dashboard that will filter Regions from the 
service catalog.
  This means that you will only get to see one region on that dashboard. 
  (e.g. you can run multiple dashboards for different regions)

  If this is acceptable for your environment it will fix all issues with 
multiple regions.
  I'll try to get the patch attached to this bugreport ;)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1356738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260617] Re: Provide the ability to attach volumes in the read-only mode

2017-10-16 Thread Akihiro Motoki
As for horizon, this totally depends on nova, so there is nothing to do until
nova supports it. It is a long-standing wishlist item and there has been no
progress. From the bug maintenance perspective, the horizon team is removing
horizon from the affected projects. In the future, if the feature is
implemented, feel free to file a new bug and a blueprint against horizon.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260617

Title:
  Provide the ability to attach volumes in the read-only mode

Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  Triaged

Bug description:
  Cinder now supports the ability to attach volumes in read-only
  mode; this should be exposed through horizon. Read-only mode could be
  ensured by hypervisor configuration during the attachment. Libvirt,
  Xen, VMware and Hyper-V support R/O volumes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346741] Re: Enable ACPI call for Stop/Terminate

2017-10-16 Thread Akihiro Motoki
horizon depends on the nova API, so there is nothing to do on the horizon side
in the current situation. Marking this as Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346741

Title:
  Enable ACPI call for Stop/Terminate

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Add a "Stop Instance" button to Horizon, so it will be possible to
  shutdown a instance using ACPI call (like running "virsh shutdown
  instance-00XX" directly at the Compute Node.

  Currently, the Horizon button "Shut Off Instance" just destroy it.

  I'm not seeing a way to gracefully shutdown a instance from Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724039] [NEW] ocata: potential AttributeError on libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV

2017-10-16 Thread Matt Riedemann
Public bug reported:

We merged this backport in stable/ocata which went into the 15.0.6 and
15.0.7 releases:

https://review.openstack.org/#/c/471353/

But we later realized (when reviewing the stable/newton proposed backport)
that the VIR_DOMAIN_BLOCK_REBASE_COPY_DEV flag is only available to
libvirt starting in 1.2.9.

We only started requiring libvirt >= 1.2.9 starting in the pike release,
and in ocata we require libvirt >= 1.2.1.

So anyone with libvirt < 1.2.9 that picks up this change in ocata and is
doing a swap volume operation with a block volume could hit an
AttributeError.

https://libvirt.org/git/?p=libvirt.git;a=blob;f=docs/news-2014.html.in

https://libvirt.org/git/?p=libvirt.git;a=commit;h=b7e73585a8d96677695a52bafb156f26cbd48fb5

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Affects: nova/ocata
 Importance: Medium
 Status: Triaged


** Tags: libvirt volumes

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Triaged

** Changed in: nova
   Status: Triaged => Invalid

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724039

Title:
  ocata: potential AttributeError on
  libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) ocata series:
  Triaged

Bug description:
  We merged this backport in stable/ocata which went into the 15.0.6 and
  15.0.7 releases:

  https://review.openstack.org/#/c/471353/

  But we later realized (when reviewing the stable/newton proposed
  backport) that the VIR_DOMAIN_BLOCK_REBASE_COPY_DEV flag is only
  available to libvirt starting in 1.2.9.

  We only started requiring libvirt >= 1.2.9 starting in the pike
  release, and in ocata we require libvirt >= 1.2.1.

  So anyone with libvirt < 1.2.9 that picks up this change in ocata and
  is doing a swap volume operation with a block volume could hit an
  AttributeError.

  https://libvirt.org/git/?p=libvirt.git;a=blob;f=docs/news-2014.html.in

  
https://libvirt.org/git/?p=libvirt.git;a=commit;h=b7e73585a8d96677695a52bafb156f26cbd48fb5
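
  A minimal defensive sketch of the pattern that avoids this (illustrative
  only, not the actual nova fix; it assumes nothing beyond the constant being
  absent from older libvirt-python bindings):

  import libvirt

  def block_rebase_copy_flags():
      # VIR_DOMAIN_BLOCK_REBASE_COPY has been around much longer; the COPY_DEV
      # variant only exists in libvirt-python >= 1.2.9, so look it up
      # defensively instead of referencing the attribute directly.
      flags = libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY
      copy_dev = getattr(libvirt, 'VIR_DOMAIN_BLOCK_REBASE_COPY_DEV', None)
      if copy_dev is not None:
          flags |= copy_dev
      return flags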

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366059] Re: Missing some entries in Default Quotas display compared to CLI output (floating_ips, fixed_ips, security_groups and security_group_rules)

2017-10-16 Thread Akihiro Motoki
The Nova API does not support the network-related quotas mentioned here
in the latest API version (as of Pike), so I am marking this bug as
Won't Fix.

Note that neutron started to provide the default quota values in the Pike
release. This could be supported via a blueprint, but that is a separate topic.

** Summary changed:

- Missing some entries in Default Quotas display compared to CLI output 
+ Missing some entries in Default Quotas display compared to CLI output 
(floating_ips, fixed_ips, security_groups and security_group_rules)

** Changed in: horizon
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1366059

Title:
  Missing some entries in Default Quotas display compared to CLI output
  (floating_ips, fixed_ips, security_groups and security_group_rules)

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  OpenStack version: Icehouse

  Issue: Missing some entries in Default Quotas display compared to CLI
  output, namely: floating_ips, fixed_ips, security_groups and
  security_group_rules

  Steps to reproduce:
  1. Log in Horizon Dashboard as admin.
  2. Navigate Admin -> System Panel -> System Info -> Default Quotas tab
  3. Notice the missing entries.

  Output from CLI:
  ubuntu@Controller:/etc/neutron$ nova quota-defaults
  +-----------------------------+-------+
  | Quota                       | Limit |
  +-----------------------------+-------+
  | instances                   | 10    |
  | cores                       | 20    |
  | ram                         | 51200 |
  | floating_ips                | 10    | <
  | fixed_ips                   | -1    | <
  | metadata_items              | 128   |
  | injected_files              | 5     |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes    | 255   |
  | key_pairs                   | 100   |
  | security_groups             | 10    | <
  | security_group_rules        | 20    | <
  +-----------------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1366059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361758] Re: Keystone should bootstrap CONF.member_role_name

2017-10-16 Thread Akihiro Motoki
The default member role in horizon can be configured via the
OPENSTACK_KEYSTONE_DEFAULT_ROLE setting. IIRC, long ago the default
member role in horizon was hardcoded, but that has not been the case for a
long time. Marking this as Invalid on the horizon side.
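
For reference, the knob mentioned above is a single entry in the dashboard
settings; a minimal example (the role name is deployment-specific, "_member_"
is only the historical default):

# openstack_dashboard/local/local_settings.py
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"  # must match a role that exists in keystone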

** Changed in: horizon
   Status: New => Invalid

** Tags removed: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1361758

Title:
  Keystone should bootstrap CONF.member_role_name

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Identity (keystone):
  Opinion

Bug description:
  Keystone should bootstrap CONF.member_role_name. As of now, it is
  created on the first create_user call. In the case of the LDAP backend there
  is no create_user call, so we will be missing this role. Horizon will
  not work without this role.

  Just like the "default" domain, we should also bootstrap
  CONF.member_role_name via keystone-manage db_sync.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605842] Re: Some explanation of permissions 'my.openstack.permission'?

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500653
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4d32b1fbe9107c1474944794fe3e1876ad21dc1d
Submitter: Zuul
Branch:master

commit 4d32b1fbe9107c1474944794fe3e1876ad21dc1d
Author: Amelia Cordwell 
Date:   Tue Sep 5 13:44:17 2017 +1200

Add permissions explanation to quickstart doc

* Also fixes part of the example which showed having a custom
  permission that could not exist in the way that
  django_openstack_auth's keystone backend was implemented.

Change-Id: I46e748302d34f82648ef6690e2d5db4618487a6a
Closes-Bug: #1605842


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605842

Title:
  Some explanation of permissions 'my.openstack.permission'?

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In this link
  http://docs.openstack.org/developer/horizon/quickstart.html, there is
  no explanation for permission 'my.openstack.permission' in section
  Panel while this is important for doc readers, especially for a
  developer who would like to customize permissions. Thanks.

  class Images(horizon.Panel):
  name = "Images"
  slug = 'images'
  permissions = ('openstack.roles.admin', 'my.openstack.permission',)
  policy_rules = (('endpoint', 'endpoint:rule'),)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716431] Re: pool of FIP is not displayed when FIP is not associated in Floating IPs tab

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/508777
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=aec3a33a2231ecdcc023bb534a90d898aad4c3dc
Submitter: Zuul
Branch:master

commit aec3a33a2231ecdcc023bb534a90d898aad4c3dc
Author: Max Yatsenko 
Date:   Sun Oct 1 20:14:22 2017 +0300

Fix displaying pool name for floating ip

It fixes an issue where the pool name is not
displayed for a floating IP when the floating IP
is not associated with an instance.

Change-Id: Ic20ec3709fa7c5313f59a16aa32b975eb004f753
Closes-Bug: #1716431


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1716431

Title:
  pool of FIP is not displayed when FIP is not associated in Floating
  IPs tab

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Version: 9.1.2 Mitaka

  In the 'Project > Compute > Access and Security' view in the Mitaka dashboard,
  we found out that on the Floating IPs tab, in the Pool column, you can
  only see the name of the pool if there are floating IPs already associated
  with instances. I believe it is caused by this -
  https://github.com/openstack/horizon/blob/mitaka-eol/openstack_dashboard/dashboards/project/access_and_security/tabs.py#L121
  The for loop populating the pool_name attribute on the IP is inside the 'if
  attached_instance_ids' condition. We expect to see which pool a floating
  IP came from before the association, as it was in Liberty. So the
  pool_name attribute should be set in a for loop outside this if
  condition.
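
  A hedged sketch of that restructuring (illustrative names only, not the
  literal tabs.py code), with the pool_name assignment running for every
  floating IP instead of only inside the 'if attached_instance_ids' branch:

  def annotate_floating_ips(floating_ips, pools, instance_names_by_id):
      """Attach display attributes to floating IP objects (illustrative)."""
      pool_names = {pool.id: pool.name for pool in pools}

      for fip in floating_ips:
          # Unconditional, so unassociated floating IPs get a pool_name too.
          fip.pool_name = pool_names.get(fip.pool, fip.pool)

          # Only associated IPs can carry an instance name.
          instance_id = getattr(fip, 'instance_id', None)
          if instance_id:
              fip.instance_name = instance_names_by_id.get(instance_id)

      return floating_ips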

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1716431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623469] Re: Unable to un-assign a user's primary project

2017-10-16 Thread Gary W. Smith
This appears to be a limitation in the keystone API.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1623469

Title:
  Unable to un-assign a user's primary project

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  The steps to reproduce the bug:
   1/ Log in to OpenStack with the user name: admin
   2/ Go to Identity -> Users -> Edit to update the "admin" user
   3/ Choose a primary project (no value is assigned now) for the admin user,
e.g. admin --> the user is updated successfully
   4/ Go to Edit for the admin user again and try to move back to the old value,
i.e. 'blank', but no such option is there and you have to select this or another
available value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1623469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657341] Re: availability zone is wrong when create volume from image

2017-10-16 Thread Gary W. Smith
*** This bug is a duplicate of bug 1721286 ***
https://bugs.launchpad.net/bugs/1721286

Thanks for the bug report.  Even though this bug report came first,
it is being marked as a duplicate since the other one has additional info.

** This bug has been marked a duplicate of bug 1721286
   Create volume from image displays incorrect AZs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1657341

Title:
  availability zone is wrong when create volume from image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Dear sir,

  I have a problem with the horizon dashboard.

  The availability zone for a new volume is not correct when the volume is
created from Images -> Create Volume.
  The availability zones displayed are nova's, not cinder's, so an
'availability zone not found' error happens.

  For further information, please check the attached image.

  Our openstack environment is below.
  Openstack version: Newton
  OS: Ubuntu 16.04 LTS

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1657341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723982] [NEW] Commands that do not exist appear in help message of nova-manage

2017-10-16 Thread Takashi NATSUME
Public bug reported:

Description
===

The following commands do not exist.
But they appear in the help message of nova-manage.

* nova-manage cell parse_server_string
* nova-manage db print_dict

Steps to reproduce
==

stack@devstack-master:~$ nova-manage cell
usage: nova-manage cell [-h] {create,delete,list,parse_server_string} ...
nova-manage cell: error: too few arguments
stack@devstack-master:~$ nova-manage cell parse_server_string
An error has occurred:
Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
  File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
missing = validate_args(fn, *fn_args, **fn_kwargs)
  File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
if six.get_method_self(fn) is not None:
AttributeError: 'function' object has no attribute 'im_self'

stack@devstack-master:~$ nova-manage db
usage: nova-manage db [-h]
  
{archive_deleted_rows,ironic_flavor_migration,null_instance_uuid_scan,online_data_migrations,print_dict,sync,version}
  ...
nova-manage db: error: too few arguments
stack@devstack-master:~$ nova-manage db print_dict
An error has occurred:
Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
  File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
missing = validate_args(fn, *fn_args, **fn_kwargs)
  File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
if six.get_method_self(fn) is not None:
AttributeError: 'function' object has no attribute 'im_self'

Environment
===

nova master(commit 1148a2d67b70dd767f798e1dfe0b4c7634d2f90c)
OS: Ubuntu 16.04.2 LTS

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress


** Tags: nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723982

Title:
  Commands that do not exist appear in help message of nova-manage

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  The following commands do not exist.
  But they appear in the help message of nova-manage.

  * nova-manage cell parse_server_string
  * nova-manage db print_dict

  Steps to reproduce
  ==

  stack@devstack-master:~$ nova-manage cell
  usage: nova-manage cell [-h] {create,delete,list,parse_server_string} ...
  nova-manage cell: error: too few arguments
  stack@devstack-master:~$ nova-manage cell parse_server_string
  An error has occurred:
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
  fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
  missing = validate_args(fn, *fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
  if six.get_method_self(fn) is not None:
  AttributeError: 'function' object has no attribute 'im_self'

  stack@devstack-master:~$ nova-manage db
  usage: nova-manage db [-h]

{archive_deleted_rows,ironic_flavor_migration,null_instance_uuid_scan,online_data_migrations,print_dict,sync,version}
...
  nova-manage db: error: too few arguments
  stack@devstack-master:~$ nova-manage db print_dict
  An error has occurred:
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1861, in main
  fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
File "/opt/stack/nova/nova/cmd/common.py", line 187, in get_action_fn
  missing = validate_args(fn, *fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/common.py", line 76, in validate_args
  if six.get_method_self(fn) is not None:
  AttributeError: 'function' object has no attribute 'im_self'

  Environment
  ===

  nova master(commit 1148a2d67b70dd767f798e1dfe0b4c7634d2f90c)
  OS: Ubuntu 16.04.2 LTS
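
  For context, a minimal reproduction of why the traceback ends in an
  AttributeError (assuming Python 2, where six.get_method_self() maps to the
  im_self attribute that only bound methods carry):

  import six

  class Command(object):
      def run(self):
          pass

  six.get_method_self(Command().run)   # fine: a bound method has im_self

  def parse_server_string():
      # stands in for the module-level helper that leaks into the help output
      pass

  six.get_method_self(parse_server_string)
  # AttributeError: 'function' object has no attribute 'im_self'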

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199536] Re: Move dict test matchers into testtools

2017-10-16 Thread Michael Turek
** Changed in: ironic
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199536

Title:
  Move dict test matchers into testtools

Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo-incubator:
  Won't Fix
Status in testtools:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  Reduce code duplication by pulling DictKeysMismatch, DictMismatch and
  DictMatches from glanceclient/tests/matchers.py into a library (e.g.
  testtools)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1199536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/506192
Committed: 
https://git.openstack.org/cgit/openstack/designate/commit/?id=e486a50f77ad09e04a6cb20bfc40c798d9161535
Submitter: Zuul
Branch:master

commit e486a50f77ad09e04a6cb20bfc40c798d9161535
Author: Jesse Pretorius 
Date:   Thu Sep 21 15:04:03 2017 +0100

Include all rootwrap filters when building wheels

The current method of specifying each rootwrap filter
in the file list is prone to errors when adding or
removing filters. Instead of relying on a manually
maintained list this patch just includes all the files
of the correct naming convention from the applicable
folder. This is simpler and easier to maintain.

Change-Id: I116efd3ff1799965bb46da785b2ad96c7f5b97c5
Closes-Bug: #1718356


** Changed in: designate
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Barbican:
  In Progress
Status in Cinder:
  In Progress
Status in Cyborg:
  In Progress
Status in Designate:
  Fix Released
Status in Fuxi:
  New
Status in Glance:
  In Progress
Status in OpenStack Heat:
  In Progress
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kuryr-libnetwork:
  New
Status in Magnum:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  Invalid
Status in openstack-ansible:
  Confirmed
Status in Sahara:
  Fix Released
Status in OpenStack DBaaS (Trove):
  In Progress
Status in Zun:
  In Progress

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment. This
  results in some rather complex mechanisms which could be radically
  simplified by simply ensuring that all the same files are included in
  the built wheel.

  A precedent for this has already been set in neutron [1], glance [2]
  and designate [3] through the use of the data_files option in the
  files section of setup.cfg.

  [1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
  [3] 
https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.

  A mailing list thread corresponding to this standard setting was begun:
  http://lists.openstack.org/pipermail/openstack-dev/2017-September/122794.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723905] Re: instance boot failure after adding vmware neutron vlan

2017-10-16 Thread Matt Riedemann
I'm not sure why you'd see AgentError in the scheduler, or for vmware,
as AgentError is only raised for the xenapi virt driver.

Can you provide more information about your setup and how to reproduce
the bug, and what version of nova you're using?

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723905

Title:
  instance boot failure after adding vmware neutron vlan

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  ERROR log from /var/log/nova/scheduler.log

  AgentError: Error during following call to agent: ['ovs-vsctl', '--
  timeout=120', '--', '--if-exists', 'del-port', u'qvode822a12-b5',
  '--', 'add-port', 'vlan-937', u'qvode822a12-b5', '--', 'set',
  'Interface', u'qvode822a12-b5', u'external-ids:iface-id=de822a12-b508
  -4fad-99ad-3a8f84ca10aa', 'external-ids:iface-status=active', u
  'external-ids:attached-mac=fa:16:3e:b4:16:5b', 'external-ids:vm-
  uuid=21b3c36f-1945-4e35-a7e2-961130a2ca43']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723928] Re: In case of volume_use_multipath=True, Nova unable to fetch CONF.libvirt.volume_use_multipath value from nova.conf

2017-10-16 Thread Matt Riedemann
Nova is just passing this value through to os-brick, so I don't really
see anything wrong with what Nova is doing here; I've added os-brick
to this bug report.

Please confirm if you are restarting the nova-compute service after
modifying nova.conf and before attaching the volume.

** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723928

Title:
  In case of volume_use_multipath=True, Nova unable to fetch
  CONF.libvirt.volume_use_multipath value from nova.conf

Status in OpenStack Compute (nova):
  Incomplete
Status in os-brick:
  New

Bug description:
  Issue :-
  --
  When we place 'volume_use_multipath=True' in nova.conf and attach a volume
to an instance, the 'connector' dictionary passed to cinder's
initialize_connection() has multipath=False (i.e. connector['multipath']=False).

  Expected :-
  --
  This should be connector['multipath']=True since I have placed
'volume_use_multipath=True'.

  connector
  {'wwpns': [u'1000d4c9ef76a1d1', u'1000d4c9ef76a1d5'], 'wwnns': 
[u'2000d4c9ef76a1d1', u'2000d4c9ef76a1d5'], 'ip': '10.50.0.155', 'initiator': 
u'iqn.1993-08.org.debian:01:db6bf10a0db', 'platform': 'x86_64', 'host': 
'cld6b11', 'do_local_attach': False, 'os_type': 'linux2', 'multipath': False}

  
  Steps to reproduce :-
  
  1) Place volume_use_multipath=True in nova.conf libvirt section
  [libvirt]
  live_migration_uri = qemu+ssh://stack@%s/system
  cpu_mode = none
  virt_type = kvm
  volume_use_multipath = True

  2) Create a lvm volume
  3) Create an instance and try to attach the volume.

  Note :- 
  -
  This multipath functionality worked fine in Ocata, but from Pike through the
current (Queens) release it is not working as expected.

  connector dictionary in ocata release :-
  connector
  {u'wwpns': [u'100038eaa73005a1', u'100038eaa73005a5'], u'wwnns': 
[u'200038eaa73005a1', u'200038eaa73005a5'], u'ip': u'10.50.128.110', 
u'initiator': u'iqn.1993-08.org.debian:01:d7f1c5d25e0', u'platform': u'x86_64', 
u'host': u'cld6b10', u'do_local_attach': False, u'os_type': u'linux2', 
u'multipath': True}
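
  For context, a rough sketch of how the libvirt driver hands this flag to
  os-brick when building the connector dict (names assumed from the Pike-era
  code, not verified against this deployment); if the dict still shows
  multipath=False, the most likely explanation is that the running
  nova-compute never re-read nova.conf:

  from os_brick.initiator import connector

  def build_volume_connector(my_ip, use_multipath, hostname, root_helper='sudo'):
      # nova feeds CONF.libvirt.volume_use_multipath in as use_multipath, so
      # the resulting dict's 'multipath' key mirrors whatever value the
      # nova-compute process loaded at startup.
      return connector.get_connector_properties(
          root_helper,
          my_ip,
          use_multipath,
          enforce_multipath=True,
          host=hostname)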

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600304] Re: _update_usage_from_migrations() can end up processing stale migrations

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339715
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0bf9c91bb7a98d0ba8a0565d60936262e635
Submitter: Zuul
Branch:master

commit 0bf9c91bb7a98d0ba8a0565d60936262e635
Author: Chris Friesen 
Date:   Fri Oct 28 03:57:37 2016 -0600

Filter out stale migrations in resource audit

When doing the resource audit there is a subtle bug in the current
code.  The problem arises if:

1) You have one or more stale migrations which didn't complete
properly that involve the current compute node.

2) The instance from the uncompleted migration is currently doing a
resize/migration that does not involve the current compute node.

When this happens, _update_usage_from_migrations() will be passed in
the stale migration, and the instance is in fact in a resize state,
so the current compute node will erroneously account for the instance.

The fix is to check that the instance migration ID matches the ID
of the migration being analyzed.  This will work because in the case
of the stale migration we will have hit the error case in
_pair_instances_to_migrations(), and so the instance will be
lazy-loaded from the DB, ensuring that its migration ID is up-to-date.

If the IDs don't match, we'll set the migration status to "error" (to
prevent retrieving that migration the next time) and skip updating
the usage from the stale migration.

Closes-Bug: #1600304
Change-Id: I6f5ad01cb1392db3e2b71e322c5be353de9071a2


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600304

Title:
  _update_usage_from_migrations() can end up processing stale migrations

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I recently found a bug in Mitaka, and it appears to be still present
  in master.

  I was testing a separate patch by doing resizes, and bugs in my code
  had resulted in a number of incomplete resizes involving compute-1.  I
  then did a resize from compute-0 to compute-0, and saw compute-1's
  resource usage go up when it ran the resource audit.

  This got me curious, so I went digging and discovered a gap in the current 
resource audit logic.  The problem arises if:
  
  1) You have one or more stale migrations which didn't complete
  properly that involve the current compute node.
  
  2) The instance from the uncompleted migration is currently doing a
  resize/migration that does not involve the current compute node.
  
  When this happens, _update_usage_from_migrations() will be passed in the 
stale migration, and since the instance is in fact in a resize state, the 
current compute node will erroneously account for the instance.  (Even though 
the instance isn't doing anything involving the current compute node.)
  
  The fix is to check that the instance migration ID matches the ID of the 
migration being analyzed.  This will work because in the case of the stale 
migration we will have hit the error case in _pair_instances_to_migrations(), 
and so the instance will be lazy-loaded from the DB, ensuring that its 
migration ID is up-to-date.
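
  A hedged sketch of that check (simplified, illustrative names; not the
  literal resource-tracker code): a migration is only accounted for when its
  id matches the migration the instance is currently part of.

  def filter_stale_migrations(migrations, instances_by_uuid):
      """Yield only migrations that are still current for their instance."""
      for migration in migrations:
          instance = instances_by_uuid.get(migration.instance_uuid)
          if instance is None or instance.migration_context is None:
              continue
          if instance.migration_context.migration_id != migration.id:
              # Leftover from an earlier, uncompleted resize/migration: flag
              # it so the next audit skips it, and do not update usage.
              migration.status = 'error'
              continue
          yield migration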

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1600304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1720130] Re: nova-manage map_instances is not using the cells info from the API database

2017-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/510844
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=eaa3023502d84502059573e492dac1aa39877207
Submitter: Zuul
Branch:master

commit eaa3023502d84502059573e492dac1aa39877207
Author: Surya Seetharaman 
Date:   Tue Oct 10 11:40:06 2017 +0200

nova-manage map_instances is not using the cells info from the API database

In order to map instances, nova-manage takes the database info of the cell 
from
the config file (so by default this points to cell0 in nova.conf) and so for
every new cell created, in order to do the map_instances, the correct 
--config-file
has to be specified in addition to the --cell_uuid option that is provided.

So it is better if this info is taken from the cell_mappings table inside 
the
nova-api database using the provided --cell_uuid.

Change-Id: Ib86e25bf8c4cd4008eae43c49c74f24b1b63518a
Closes-Bug: #1720130


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1720130

Title:
  nova-manage map_instances is not using the cells info from the API
  database

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In order to map instances, nova-manage takes the DB info of the cells
  from the config file (so by default this points to cell0 in nova.conf)
  and so for every new cell created, in order to do the map_instances,
  the correct config file has to be specified in addition to the
  --cell_uuid option that is provided i.e like this :

  nova-manage --config-file /etc/nova/nova_cell2.conf cell_v2
  map_instances --cell_uuid ""

  So it is better if this info is taken from the cell_mappings table
  inside the API database. Basically query for the database_connection
  column value of the cell provided, from the cell_mappings table in
  nova-api DB and map the unmapped instances to the --cell_uuid that is
  provided.
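
  A rough sketch of the lookup this relies on (assuming the nova.objects API
  of that era; illustrative, not the actual patch): resolve the cell's
  database_connection from the API database instead of the --config-file
  value.

  from nova import context as nova_context
  from nova.objects import cell_mapping as cell_mapping_obj

  def get_cell_database_connection(cell_uuid):
      # CellMapping rows live in the nova_api database; the mapping for the
      # given --cell_uuid carries the connection string map_instances needs,
      # instead of the [database]/connection value from the config file.
      ctxt = nova_context.get_admin_context()
      mapping = cell_mapping_obj.CellMapping.get_by_uuid(ctxt, cell_uuid)
      return mapping.database_connection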

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1720130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723928] [NEW] In case of volume_use_multipath=True, Nova unable to fetch CONF.libvirt.volume_use_multipath value from nova.conf

2017-10-16 Thread Vivek Soni
Public bug reported:

Issue :-
--
When we place 'volume_use_multipath=True' in nova.conf and attach a volume
to an instance, the 'connector' dictionary passed to cinder's
initialize_connection() has multipath=False (i.e. connector['multipath']=False).

Expected :-
--
This should be connector['multipath']=True since I have placed
'volume_use_multipath=True'.

connector
{'wwpns': [u'1000d4c9ef76a1d1', u'1000d4c9ef76a1d5'], 'wwnns': 
[u'2000d4c9ef76a1d1', u'2000d4c9ef76a1d5'], 'ip': '10.50.0.155', 'initiator': 
u'iqn.1993-08.org.debian:01:db6bf10a0db', 'platform': 'x86_64', 'host': 
'cld6b11', 'do_local_attach': False, 'os_type': 'linux2', 'multipath': False}


Steps to reproduce :-

1) Place volume_use_multipath=True in nova.conf libvirt section
[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm
volume_use_multipath = True

2) Create a lvm volume
3) Create an instance and try to attach the volume.

Note :- 
-
This multipath functionality worked fine in Ocata, but from Pike through the
current (Queens) release it is not working as expected.

connector dictionary in ocata release :-
connector
{u'wwpns': [u'100038eaa73005a1', u'100038eaa73005a5'], u'wwnns': 
[u'200038eaa73005a1', u'200038eaa73005a5'], u'ip': u'10.50.128.110', 
u'initiator': u'iqn.1993-08.org.debian:01:d7f1c5d25e0', u'platform': u'x86_64', 
u'host': u'cld6b10', u'do_local_attach': False, u'os_type': u'linux2', 
u'multipath': True}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723928

Title:
  In case of volume_use_multipath=True, Nova unable to fetch
  CONF.libvirt.volume_use_multipath value from nova.conf

Status in OpenStack Compute (nova):
  New

Bug description:
  Issue :-
  --
  When we place 'volume_use_multipath=True' in nova.conf and attach a volume
to an instance, the 'connector' dictionary passed to cinder's
initialize_connection() has multipath=False (i.e. connector['multipath']=False).

  Expected :-
  --
  This should be connector['multipath']=True since I have placed
'volume_use_multipath=True'.

  connector
  {'wwpns': [u'1000d4c9ef76a1d1', u'1000d4c9ef76a1d5'], 'wwnns': 
[u'2000d4c9ef76a1d1', u'2000d4c9ef76a1d5'], 'ip': '10.50.0.155', 'initiator': 
u'iqn.1993-08.org.debian:01:db6bf10a0db', 'platform': 'x86_64', 'host': 
'cld6b11', 'do_local_attach': False, 'os_type': 'linux2', 'multipath': False}

  
  Steps to reproduce :-
  
  1) Place volume_use_multipath=True in nova.conf libvirt section
  [libvirt]
  live_migration_uri = qemu+ssh://stack@%s/system
  cpu_mode = none
  virt_type = kvm
  volume_use_multipath = True

  2) Create a lvm volume
  3) Create an instance and try to attach the volume.

  Note :- 
  -
  This multipath functionality worked fine in Ocata, but from Pike through the
current (Queens) release it is not working as expected.

  connector dictionary in ocata release :-
  connector
  {u'wwpns': [u'100038eaa73005a1', u'100038eaa73005a5'], u'wwnns': 
[u'200038eaa73005a1', u'200038eaa73005a5'], u'ip': u'10.50.128.110', 
u'initiator': u'iqn.1993-08.org.debian:01:d7f1c5d25e0', u'platform': u'x86_64', 
u'host': u'cld6b10', u'do_local_attach': False, u'os_type': u'linux2', 
u'multipath': True}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723918] [NEW] nova-placement-api port mismatch

2017-10-16 Thread ebl...@nde.ag
Public bug reported:

Description:
Upgraded first from Mitaka to Newton, then to Ocata, and also upgraded openSUSE 
Leap 42.1 to Leap 42.3, so it's not a fresh installation, but since I couldn't 
find any hint to check the /etc/apache2/vhosts.d/nova-placement-api.conf.sample 
I assume this will also be a problem in a fresh installation.

I followed OBS install guide [1] to configure nova-placement-api:

1. Created the Placement API entry in the service catalog successfully...
2. Created the Placement API service endpoints successfully:
  openstack endpoint create --region RegionOne placement public 
http://controller:8778
  openstack endpoint create --region RegionOne placement internal 
http://controller:8778
  openstack endpoint create --region RegionOne placement admin 
http://controller:8778
3. Synced databases and registered cell0 database successfully...
4. Enabled the placement API Apache vhost:
  mv /etc/apache2/vhosts.d/nova-placement-api.conf.sample 
/etc/apache2/vhosts.d/nova-placement-api.conf
  systemctl reload apache2.service

Expected results:

- nova-compute reports success ("Created resource provider record via placement 
API for resource provider...")
- "nova-status upgrade check" shows upgrade status

Actual results:

- nova-compute reports:
2017-10-16 10:53:31.247 8400 WARNING nova.scheduler.client.report 
[req-20cf1da3-86c4-4b03-bb13-84a54fd38425 - - - - -] Placement API service is 
not responding.
2017-10-16 10:53:31.249 8400 WARNING nova.scheduler.client.report 
[req-20cf1da3-86c4-4b03-bb13-84a54fd38425 - - - - -] Placement API service is 
not responding.
2017-10-16 10:53:31.249 8400 WARNING nova.scheduler.client.report 
[req-20cf1da3-86c4-4b03-bb13-84a54fd38425 - - - - -] Unable to refresh my 
resource provider record

- "nova-status upgrade check" shows connection error:

---cut here---
control1:/var/log/cinder #  nova-status upgrade check
Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386, in check
result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201, in 
_check_placement
versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189, in 
_placement_get
return client.get(path, endpoint_filter=ks_filter).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, 
in get
return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in 
inner
return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 616, 
in request
resp = send(**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 690, 
in _send_request
raise exceptions.ConnectFailure(msg)
ConnectFailure: Unable to establish connection to http://controller:8778/: 
HTTPConnectionPool(host='controller', port=8778): Max retries exceeded with 
url: / (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
ECONNREFUSED',))
---cut-here---

The problem is a port misconfiguration in the nova-placement-api vhost; it
doesn't match the endpoint configuration:
 
---cut-here---
control1:/var/log/cinder # diff -u 
/etc/apache2/vhosts.d/nova-placement-api.conf.dist 
/etc/apache2/vhosts.d/nova-placement-api.conf
--- /etc/apache2/vhosts.d/nova-placement-api.conf.dist  2017-10-16 
12:19:28.503899981 +0200
+++ /etc/apache2/vhosts.d/nova-placement-api.conf   2017-10-16 
12:19:41.363642540 +0200
@@ -1,8 +1,8 @@
 # OpenStack nova-placement-api Apache2 example configuration

-Listen 8780
+Listen 8778

-<VirtualHost *:8780>
+<VirtualHost *:8778>
 WSGIScriptAlias / /srv/www/nova-placement-api/app.wsgi
 WSGIDaemonProcess nova-placement-api processes=2 threads=1 user=nova 
group=nova
 WSGIProcessGroup nova-placement-api
---cut-here---

Changing the port to match the port configured in the endpoints solves
the problem.

[1] https://docs.openstack.org/ocata/install-guide-obs/nova-controller-
install.html#install-and-configure-components

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723918

Title:
  nova-placement-api port mismatch

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  Upgraded first from Mitaka to Newton, then to Ocata, and also upgraded 
openSUSE Leap 42.1 to Leap 42.3, so it's not a fresh installation, but since I 
couldn't find any hint to check the 
/etc/apache2/vhosts.d/nova-placement-api.conf.sample I assume this will also be 
a problem in a fresh installation.

  I followed OBS install guide [1] to configure nova-placement-api:

  1. Created the Placement API entry in the service 

[Yahoo-eng-team] [Bug 1723912] [NEW] Refactor securitygroup fullstack tests

2017-10-16 Thread Dongcan Ye
Public bug reported:

Currently, all securitygroup fullstack tests are located in one method:
test_securitygroup.
As Jakub Libosvar suggests, we can separate them into one test per scenario.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: fullstack

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723912

Title:
  Refactor securitygroup fullstack tests

Status in neutron:
  New

Bug description:
  Currently, all securitygroup fullstack tests are located in one method:
test_securitygroup.
  As Jakub Libosvar suggests, we can separate them into one test per scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723905] [NEW] instance boot failure after adding vmware neutron vlan

2017-10-16 Thread shuangyang.qian
Public bug reported:

ERROR log from /var/log/nova/scheduler.log

AgentError: Error during following call to agent: ['ovs-vsctl', '--
timeout=120', '--', '--if-exists', 'del-port', u'qvode822a12-b5', '--',
'add-port', 'vlan-937', u'qvode822a12-b5', '--', 'set', 'Interface',
u'qvode822a12-b5', u'external-ids:iface-id=de822a12-b508-4fad-99ad-
3a8f84ca10aa', 'external-ids:iface-status=active', u'external-ids
:attached-mac=fa:16:3e:b4:16:5b', 'external-ids:vm-
uuid=21b3c36f-1945-4e35-a7e2-961130a2ca43']

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723905

Title:
  instance boot failure after adding vmware neutron vlan

Status in OpenStack Compute (nova):
  New

Bug description:
  ERROR log from /var/log/nova/scheduler.log

  AgentError: Error during following call to agent: ['ovs-vsctl', '--
  timeout=120', '--', '--if-exists', 'del-port', u'qvode822a12-b5',
  '--', 'add-port', 'vlan-937', u'qvode822a12-b5', '--', 'set',
  'Interface', u'qvode822a12-b5', u'external-ids:iface-id=de822a12-b508
  -4fad-99ad-3a8f84ca10aa', 'external-ids:iface-status=active', u
  'external-ids:attached-mac=fa:16:3e:b4:16:5b', 'external-ids:vm-
  uuid=21b3c36f-1945-4e35-a7e2-961130a2ca43']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723891] [NEW] Fip agent port created on compute nodes with "dvr_no_external" mode

2017-10-16 Thread sunzuohua
Public bug reported:

I have a devstack with following configuration:
network nodes:
/etc/neutron/l3_agent.ini
[default]
agent_mode = dvr_snat
compute nodes:
/etc/neutron/l3_agent.ini
[default]
agent_mode = dvr_no_external

The reproduction steps:
1. Launch a VM and assign a floating IP to it
2. Check the ports for the router
3. You can see that the fip agent port for the host hosting the VM is created,
but not used.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dvr

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723891

Title:
  Fip agent port created on compute nodes with "dvr_no_external" mode

Status in neutron:
  New

Bug description:
  I have a devstack with following configuration:
  network nodes:
  /etc/neutron/l3_agent.ini
  [default]
  agent_mode = dvr_snat
  compute nodes:
  /etc/neutron/l3_agent.ini
  [default]
  agent_mode = dvr_no_external

  The reproduction steps:
  1. Launch a VM and assign a floating IP to it
  2. Check the ports for the router
  3. You can see that the fip agent port for the host hosting the VM is created,
but not used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723880] [NEW] Unshelve instance failed with availability zone which be deleted

2017-10-16 Thread TingtingYu
Public bug reported:

I create an instance with the availability zone named 'test_az' and host named 
'compute', and then shelve the instance, then delete the availability zone 
test_az.
But the instance's availability zone is still 'test_az', and unshelving the
instance failed with the message "No valid host found for unshelve instance".
[root@controller ~(admin)]$ nova show test3
+---------------------------------------+---------------------------------------------------+
| Property                              | Value                                             |
+---------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                     | AUTO                                              |
| OS-EXT-AZ:availability_zone           | test_az                                           |
| OS-EXT-SRV-ATTR:host                  | -                                                 |
| OS-EXT-SRV-ATTR:hostname              | test3                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname   | -                                                 |
| OS-EXT-SRV-ATTR:instance_name         | instance-0004                                     |
| OS-EXT-SRV-ATTR:kernel_id             |                                                   |
| OS-EXT-SRV-ATTR:launch_index          | 0                                                 |
| OS-EXT-SRV-ATTR:ramdisk_id            |                                                   |
| OS-EXT-SRV-ATTR:reservation_id        | r-3i45u68y                                        |
| OS-EXT-SRV-ATTR:root_device_name      | /dev/vda                                          |
| OS-EXT-SRV-ATTR:user_data             | -                                                 |
| OS-EXT-STS:power_state                | 4                                                 |
| OS-EXT-STS:task_state                 | -                                                 |
| OS-EXT-STS:vm_state                   | shelved_offloaded                                 |
| OS-SRV-USG:launched_at                | 2017-10-16T07:05:18.00                            |
| OS-SRV-USG:terminated_at              | -                                                 |
| accessIPv4                            |                                                   |
| accessIPv6                            |                                                   |
| config_drive                          |                                                   |
| created                               | 2017-10-16T07:05:12Z                              |
| description                           | test3                                             |
| flavor                                | m1.small (2)                                      |
| hostId                                |                                                   |
| host_status                           |                                                   |
| id                                    | 8cf627b7-fe67-405b-92b1-a42a6c66f8a6              |
| image                                 | centos_7_2 (090d5564-9c64-459e-8d10-4382f1c72488) |
| key_name                              | -                                                 |
| locked                                | False                                             |
| metadata                              | {}                                                |
| name                                  | test3                                             |
| os-extended-volumes:volumes_attached  | []                                                |
| security_groups                       | default                                           |
| status                                | SHELVED_OFFLOADED                                 |
| tenant_id                             | 764169e694474ba0a40030e6c8531704                  |
| test network                          | 192.168.0.5                                       |
| updated                               | 2017-10-16T07:07:33Z                              |
| user_id                               | 2ccb44e4ce2e42d5b805b898f2c10243                  |
+---------------------------------------+---------------------------------------------------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723880

Title:
  Unshelve instance failed with availability zone which be deleted

Status in OpenStack Compute (nova):
  New

Bug description:
  I create an instance with the availability zone named 'test_az' and host 
named 'compute', and then shelve the instance, then delete the availability 
zone test_az.
  But the instance's availability zone is still 'test_az', and unshelving the
instance failed with the message "No valid host found for unshelve instance".

[Yahoo-eng-team] [Bug 1723856] Re: lbaasv2 tests fail with error

2017-10-16 Thread Rabi Mishra
So reverting 4f627b4e8dfe699944a196fe90e0642cced6278f fixes the lbaas
issue and hence the heat gate.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: heat
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723856

Title:
  lbaasv2 tests fail with error

Status in OpenStack Heat:
  New
Status in neutron:
  New

Bug description:
  Noticed at:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-lbaasv2/dcd512d/job-output.txt.gz

  
  lbaasv2 agent log:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-
  lbaasv2/dcd512d/logs/screen-q-lbaasv2.txt.gz?#_Oct_16_02_26_51_171646

  
  May be due to https://review.openstack.org/#/c/505701/

  traceback:

  2017-10-16 02:45:43.838922 | primary | 2017-10-16 02:45:43.838 | 
==
  2017-10-16 02:45:43.840365 | primary | 2017-10-16 02:45:43.840 | Failed 2 
tests - output below:
  2017-10-16 02:45:43.842320 | primary | 2017-10-16 02:45:43.841 | 
==
  2017-10-16 02:45:43.843926 | primary | 2017-10-16 02:45:43.843 |
  2017-10-16 02:45:43.845738 | primary | 2017-10-16 02:45:43.845 | 
heat_integrationtests.functional.test_lbaasv2.LoadBalancerv2Test.test_create_update_loadbalancer
  2017-10-16 02:45:43.847384 | primary | 2017-10-16 02:45:43.846 | 

  2017-10-16 02:45:43.848836 | primary | 2017-10-16 02:45:43.848 |
  2017-10-16 02:45:43.850193 | primary | 2017-10-16 02:45:43.849 | Captured 
traceback:
  2017-10-16 02:45:43.851909 | primary | 2017-10-16 02:45:43.851 | 
~~~
  2017-10-16 02:45:43.853340 | primary | 2017-10-16 02:45:43.852 | 
Traceback (most recent call last):
  2017-10-16 02:45:43.855053 | primary | 2017-10-16 02:45:43.854 |   File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_lbaasv2.py", line 
109, in test_create_update_loadbalancer
  2017-10-16 02:45:43.856727 | primary | 2017-10-16 02:45:43.856 | 
parameters=parameters)
  2017-10-16 02:45:43.858396 | primary | 2017-10-16 02:45:43.857 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 437, in 
update_stack
  2017-10-16 02:45:43.859969 | primary | 2017-10-16 02:45:43.859 | 
self._wait_for_stack_status(**kwargs)
  2017-10-16 02:45:43.861455 | primary | 2017-10-16 02:45:43.861 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 368, in 
_wait_for_stack_status
  2017-10-16 02:45:43.862957 | primary | 2017-10-16 02:45:43.862 | 
fail_regexp):
  2017-10-16 02:45:43.864506 | primary | 2017-10-16 02:45:43.864 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 327, in 
_verify_status
  2017-10-16 02:45:43.866142 | primary | 2017-10-16 02:45:43.865 | 
stack_status_reason=stack.stack_status_reason)
  2017-10-16 02:45:43.867842 | primary | 2017-10-16 02:45:43.867 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
LoadBalancerv2Test-1022777367/f0a78a75-c1ed-4921-a7f7-c4028f3f60c3 is in 
UPDATE_FAILED status due to 'Resource UPDATE failed: ResourceInError: 
resources.loadbalancer: Went to status ERROR due to "Unknown"'
  2017-10-16 02:45:43.869183 | primary | 2017-10-16 02:45:43.868 |
  2017-10-16 02:45:43.870571 | primary | 2017-10-16 02:45:43.870 |
  2017-10-16 02:45:43.872501 | primary | 2017-10-16 02:45:43.872 | 
heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.874213 | primary | 2017-10-16 02:45:43.873 | 

  2017-10-16 02:45:43.875784 | primary | 2017-10-16 02:45:43.875 |
  2017-10-16 02:45:43.877352 | primary | 2017-10-16 02:45:43.876 | Captured 
traceback:
  2017-10-16 02:45:43.878767 | primary | 2017-10-16 02:45:43.878 | 
~~~
  2017-10-16 02:45:43.880302 | primary | 2017-10-16 02:45:43.879 | 
Traceback (most recent call last):
  2017-10-16 02:45:43.881941 | primary | 2017-10-16 02:45:43.881 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 97, in test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.883543 | primary | 2017-10-16 02:45:43.883 | 
self.check_num_responses(lb_url, 1)
  2017-10-16 02:45:43.884968 | primary | 2017-10-16 02:45:43.884 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 51, in check_num_responses
  2017-10-16 02:45:43.886354 | primary | 2017-10-16 02:45:43.885 | 
self.assertEqual(expected_num, len(resp))
  2017-10-16 02:45:43.887791 | primary | 2017-10-16 02:45:43.887