[Yahoo-eng-team] [Bug 1747848] [NEW] qcow2 format image uploaded to raw format, failed to start the virtual machine with ceph as backend storage

2018-02-06 Thread Brin Zhang
Public bug reported:

In the glance-api.conf configuration (using Ceph as the backend storage):
[DEFAULT]
..
show_image_direct_url = True
show_multiple_locations = True
..

[glance_store]
filesystem_store_datadir = /opt/stack/data/glance/images/
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = admin
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
rbd_secret_uuid = 08bf86f1-09c0-4f03-90e6-ae361d520c57

If a qcow2 image is uploaded with its disk format declared as raw and a
virtual machine is launched from that image, then once the virtual
machine is created and its console is opened, the console shows "No boot
device": the virtual machine cannot boot from this image.
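
For illustration, a minimal client-side pre-upload check (a hedged
sketch: it assumes qemu-img is installed locally, and 'image.img' is a
placeholder path) that refuses to declare a non-raw file as raw:

    # Ask qemu-img for the real container format before uploading.
    import json
    import subprocess

    def real_format(path):
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path])
        return json.loads(out)['format']  # e.g. 'qcow2' or 'raw'

    fmt = real_format('image.img')
    if fmt != 'raw':
        raise SystemExit('refusing to upload as raw: file is %s' % fmt)

With an RBD backend the guest is handed the image bytes unconverted, so
a qcow2 payload mislabelled as raw matches the "No boot device" symptom.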

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747848

Title:
  qcow2 format image uploaded to raw format, failed to start the virtual
  machine with ceph as backend storage

Status in Glance:
  New

Bug description:
  In the glance-api.conf configuration (using Ceph as the backend storage):
  [DEFAULT]
  ..
  show_image_direct_url = True
  show_multiple_locations = True
  ..

  [glance_store]
  filesystem_store_datadir = /opt/stack/data/glance/images/
  default_store = rbd
  stores = rbd
  rbd_store_pool = images
  rbd_store_user = admin
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_chunk_size = 8
  rbd_secret_uuid = 08bf86f1-09c0-4f03-90e6-ae361d520c57

  If a qcow2 image is uploaded with its disk format declared as raw and a
virtual machine is launched from that image, then once the virtual machine is
created and its console is opened, the console shows "No boot device": the
virtual machine cannot boot from this image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747848/+subscriptions



[Yahoo-eng-team] [Bug 1747843] [NEW] User Settings > Change Password page contains unlocalized string "Use password for"

2018-02-06 Thread Yuko Katabami
Public bug reported:

On the "User Settings > Change Password" page, when you click on
"Current password", "New password" or "Confirm new password" field, a
drop down list starting with the text "Use password for:" is shown.
together with available usernames.

Even though the UI language is set to non-English, this is shown in English.
This should be localized.
The string is not currently in Zanata.
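
For context, Horizon strings only become translatable (and land in
Zanata) once they are wrapped in the standard Django gettext helpers; a
hedged sketch of the usual marking, not the actual fix:

    # Mark a string for translation so makemessages extracts it.
    from django.utils.translation import ugettext_lazy as _

    label = _("Use password for:")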

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1747843

Title:
  User Settings > Change Password page contains unlocalized string "Use
  password for"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the "User Settings > Change Password" page, when you click on
  "Current password", "New password" or "Confirm new password" field, a
  drop down list starting with the text "Use password for:" is shown.
  together with available usernames.

  Even though the UI language is set to non-English, this is shown in English.
  This should be localized.
  The string is not currently in Zanata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1747843/+subscriptions



[Yahoo-eng-team] [Bug 1747690] Re: master promotion: Failed to call refresh: glance-manage db_sync

2018-02-06 Thread Shilpa Devharakar
** Project changed: glance => tripleo

** Changed in: tripleo
Milestone: queens-rc1 => None

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Shilpa Devharakar (shilpasd)

** Changed in: tripleo
 Assignee: Shilpa Devharakar (shilpasd) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747690

Title:
  master promotion: Failed to call refresh: glance-manage  db_sync

Status in Glance:
  New
Status in tripleo:
  Triaged

Bug description:
  periodic-tripleo-centos-7-master-containers-build

  undercloud install fails on glance_manage db_sync:

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-
  centos-7-master-containers-
  
build/e7c61cd/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2018-02-06_14_23_33

  2018-02-06 14:23:33 | 2018-02-06 14:23:33,117 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Failed to call 
refresh: glance-manage  db_sync returned 1 instead of one of [0]
  2018-02-06 14:23:33 | 2018-02-06 14:23:33,118 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: glance-manage  
db_sync returned 1 instead of one of [0]

  
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-centos-7-master-containers-build/e7c61cd/undercloud/var/log/extra/errors.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747690/+subscriptions



[Yahoo-eng-team] [Bug 1747690] Re: master promotion: Failed to call refresh: glance-manage db_sync

2018-02-06 Thread Shilpa Devharakar
** Project changed: tripleo => glance

** Changed in: glance
Milestone: queens-rc1 => None

** Changed in: glance
 Assignee: (unassigned) => Shilpa Devharakar (shilpasd)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747690

Title:
  master promotion: Failed to call refresh: glance-manage  db_sync

Status in Glance:
  New
Status in tripleo:
  Triaged

Bug description:
  periodic-tripleo-centos-7-master-containers-build

  undercloud install fails on glance_manage db_sync:

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-
  centos-7-master-containers-
  
build/e7c61cd/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2018-02-06_14_23_33

  2018-02-06 14:23:33 | 2018-02-06 14:23:33,117 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Failed to call 
refresh: glance-manage  db_sync returned 1 instead of one of [0]
  2018-02-06 14:23:33 | 2018-02-06 14:23:33,118 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: glance-manage  
db_sync returned 1 instead of one of [0]

  
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-centos-7-master-containers-build/e7c61cd/undercloud/var/log/extra/errors.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747690/+subscriptions



[Yahoo-eng-team] [Bug 1747690] [NEW] master promotion: Failed to call refresh: glance-manage db_sync

2018-02-06 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

periodic-tripleo-centos-7-master-containers-build

undercloud install fails on glance_manage db_sync:

https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-centos-7
-master-containers-
build/e7c61cd/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2018-02-06_14_23_33

2018-02-06 14:23:33 | 2018-02-06 14:23:33,117 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Failed to call 
refresh: glance-manage  db_sync returned 1 instead of one of [0]
2018-02-06 14:23:33 | 2018-02-06 14:23:33,118 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: glance-manage  
db_sync returned 1 instead of one of [0]


https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-centos-7-master-containers-build/e7c61cd/undercloud/var/log/extra/errors.txt.gz

** Affects: glance
 Importance: Critical
 Status: Triaged


** Tags: alert ci promotion-blocker
-- 
master promotion: Failed to call refresh: glance-manage  db_sync
https://bugs.launchpad.net/bugs/1747690
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1747003] Re: A bad _RC_CACHE can rarely cause unit tests to fail

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540404
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=95ad6a2a9af8833baf69f025cad185e8bc857653
Submitter: Zuul
Branch:master

commit 95ad6a2a9af8833baf69f025cad185e8bc857653
Author: Chris Dent 
Date:   Fri Feb 2 14:40:44 2018 +

Reset the _RC_CACHE between tests

Very rarely the _RC_CACHE used for caching ResourceClass id and name
mappings will be wrong for tests, resulting in the find() method on
InventoryList returning None, leading to inv.capacity calls failing.

This only showed up in the tests for
Iea182341f9419cb514a044f76864d6bec60a3683, where the order of tests
is changed because of a lot of renaming of modules.

Change-Id: Idc318d3914fa600deff613d8a43eadd9073fa262
Closes-Bug: #1747003


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747003

Title:
  A bad _RC_CACHE can rarely cause unit tests to fail

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Very rarely (so rarely in fact that it only seems to happen when test
  order is much different from the norm) some unit tests which encounter
  the resource_class_cache can fail as follows:

  http://logs.openstack.org/49/540049/2/check/openstack-tox-
  py27/176a6b3/testr_results.html.gz

  -=-=-
  ft1.1: 
nova.tests.unit.cmd.test_status.TestUpgradeCheckResourceProviders.test_check_resource_providers_no_compute_rps_one_compute_StringException:
 pythonlogging:'': {{{2018-02-02 11:30:00,443 WARNING [oslo_config.cfg] Config 
option key_manager.api_class  is deprecated. Use option key_manager.backend 
instead.}}}

  Traceback (most recent call last):
File "nova/tests/unit/cmd/test_status.py", line 588, in 
test_check_resource_providers_no_compute_rps_one_compute
  self._create_resource_provider(FAKE_IP_POOL_INVENTORY)
File "nova/tests/unit/cmd/test_status.py", line 561, in 
_create_resource_provider
  rp.set_inventory(inv_list)
File "nova/api/openstack/placement/objects/resource_provider.py", line 737, 
in set_inventory
  exceeded = _set_inventory(self._context, self, inv_list)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 986, in wrapper
  return fn(*args, **kwargs)
File "nova/api/openstack/placement/objects/resource_provider.py", line 372, 
in _set_inventory
  _add_inventory_to_provider(context, rp, inv_list, to_add)
File "nova/api/openstack/placement/objects/resource_provider.py", line 201, 
in _add_inventory_to_provider
  if inv_record.capacity <= 0:
  AttributeError: 'NoneType' object has no attribute 'capacity'
  -=-=-

  The find() method on InventoryList can return None if that cache is
  bad.

  This can be resolved (apparently) by resetting the _RC_CACHE between
  test runs in the same way that _TRAITS_SYNCED is reset, in
  nova/test.py:

  -# Reset the traits sync flag
  -objects.resource_provider._TRAITS_SYNCED = False
  +# Reset the traits sync flag and rc cache
  +resource_provider._TRAITS_SYNCED = False
  +resource_provider._RC_CACHE = None
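
  A minimal sketch of the same reset done from a test base class
  (module path as in the diff above; assumes both module-level
  attributes exist):

      import testtools

      from nova.api.openstack.placement.objects import resource_provider

      class PlacementTestCase(testtools.TestCase):
          def setUp(self):
              super(PlacementTestCase, self).setUp()
              # Start each test with cold caches so ordering cannot
              # leak ResourceClass id/name mappings between tests.
              resource_provider._TRAITS_SYNCED = False
              resource_provider._RC_CACHE = None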

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747003/+subscriptions



[Yahoo-eng-team] [Bug 1745340] Re: Nova assumes that USB Host is present

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/538003
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b92e3bc95fd69759031b7439b7f8392926910cc2
Submitter: Zuul
Branch:master

commit b92e3bc95fd69759031b7439b7f8392926910cc2
Author: Marcin Juszkiewicz 
Date:   Thu Jan 25 20:00:37 2018 +0100

Make sure that we have usable input for graphical console

A graphical console is optional, but when it is enabled it is
good to have some input devices.

On x86(-64) this is handled by a PS/2 keyboard/mouse. On ppc64 we have
a USB keyboard/mouse. On aarch64 we have nothing.

So make sure that a USB host controller is available and that a USB
keyboard is present. The USB tablet (the default pointer_model device)
will then also have a port to plug into.

Closes-bug: 1745340

Change-Id: I69a934d188446a1aa95ab33975dbe1d6e058ebf9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745340

Title:
  Nova assumes that USB Host is present

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I am working on getting OpenStack running on the aarch64 architecture,
  and it is there. But I wanted to have a graphical console like the one
  present on x86.

  Went through the settings, enabled VNC/Spice, and got a "libvirtError:
  internal error: No free USB ports" message instead.

  Dug into the code, and it looks like Nova blindly assumes that a USB
  host controller is present in the VM instance, as it just adds a
  usbtablet device and starts the instance.

  But it should add the USB host device first...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1745340/+subscriptions



[Yahoo-eng-team] [Bug 1747562] Re: CPU topologies in nova - wrong link for "Manage Flavors"

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/541116
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=26de90a14d527ae18dd6cd39d4fdf589dd449fb6
Submitter: Zuul
Branch:master

commit 26de90a14d527ae18dd6cd39d4fdf589dd449fb6
Author: Yikun Jiang 
Date:   Tue Feb 6 11:36:38 2018 +0800

Fix wrong link for "Manage Flavors" in CPU topologies doc

The last sentence here where it links to "Manage Flavors"
is the wrong link. It goes here:
https://docs.openstack.org/nova/latest/admin/flavors.html which
doesn't talk about NUMA extra specs. It should be pointing at
the "NUMA topology" section of the flavor extra specs page:

https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs-numa-topology

Change-Id: I30f6bc70afc5be00737cdf76e0e47bcb898a3a7f
Closes-Bug: #1747562


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747562

Title:
  CPU topologies in nova - wrong link for "Manage Flavors"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way:

  The last sentence here where it links to "Manage Flavors" is the wrong
  link:

  https://docs.openstack.org/nova/latest/admin/cpu-topologies.html
  #customizing-instance-numa-placement-policies

  It goes here https://docs.openstack.org/nova/latest/admin/flavors.html
  which doesn't talk about NUMA extra specs. It should be pointing at
  the "NUMA topology" section of the flavor extra specs page:

  
https://docs.openstack.org/nova/latest/user/flavors.html?highlight=numa%20topology
  #extra-specs

  
  ---
  Release: 17.0.0.0b4.dev154 on 2018-02-05 22:44
  SHA: 3ed7da92bbf7926f8511b6f4efff66c8cc00294c
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/cpu-topologies.rst
  URL: https://docs.openstack.org/nova/latest/admin/cpu-topologies.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747562/+subscriptions



[Yahoo-eng-team] [Bug 1745502] Re: novaclient hosts has been removed in version 10

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/538472
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=45442821d0864d89c78236af00abf6c8f0711053
Submitter: Zuul
Branch:master

commit 45442821d0864d89c78236af00abf6c8f0711053
Author: Akihiro Motoki 
Date:   Sun Jan 28 02:19:44 2018 +0900

Use nova os-services to retrieve host list

novaclient version 10 has dropped support for novaclient.v2.hosts.
The os-aggregates and migrateLive APIs expect hosts from os-services,
so this commit retrieves the host list from the os-services API.

Change-Id: I5ec007ab1f244ca8caee1eb7b5d4262ac6c32171
Closes-Bug: #1745502


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1745502

Title:
  novaclient hosts has been removed in version 10

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In version 10.0 of novaclient (the desired version for Queens) the
  support for novaclient.v2.hosts has been removed. There is an import
  statement in horizon to pull in that submodule. As such horizon won't
  work with 10.0 without resolving this.

  It appears the same functionality has been remapped. Changes to remap 
horizon's use are required.
  
https://review.openstack.org/#/c/459496/4/releasenotes/notes/microversion-v2_43-76db2ac463b431e4.yaml
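
  A hedged sketch of the replacement call path (the keystoneauth
  session 'sess' is assumed to exist): list nova-compute hosts through
  the os-services API instead of the removed novaclient.v2.hosts:

      from novaclient import client

      nova = client.Client('2', session=sess)
      hosts = sorted({svc.host
                      for svc in nova.services.list(binary='nova-compute')})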

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1745502/+subscriptions



[Yahoo-eng-team] [Bug 1747437] Re: DHCP TestDeviceManager tests fail when IPv6 is not enabled on testing host

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540868
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9bef065bd0f05ecb9a3aed8bd2f651809a55dfec
Submitter: Zuul
Branch:master

commit 9bef065bd0f05ecb9a3aed8bd2f651809a55dfec
Author: Maciej Józefczyk 
Date:   Mon Feb 5 15:14:43 2018 +0100

Mock ipv6_utils.is_enabled_and_bind_by_default method

We test the DHCP agent's DeviceManager without mocking the method
ipv6_utils.is_enabled_and_bind_by_default(). Because of that, the
tests fail all the time on hosts without IPv6 support.
This patch adds a mock to prevent those failures.

Change-Id: Icb4854892839a20619e92852c8b1a317d71231da
Closes-Bug: #1747437


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747437

Title:
  DHCP TestDeviceManager tests fail when IPv6 is not enabled on testing
  host

Status in neutron:
  Fix Released

Bug description:
  When an instance does not have IPv6 enabled, the listed tests fail
  because of the expected-calls checks:

  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup
  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_device_is_ready
  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_ipv6

  
  Expected:
  [call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 
'12345678-1234-5678-1234567890ab', 'enable_dhcp': True, 'tenant_id': 
'---', 'ip_version': 4, 'id': 
'---', 'allocation_pools': {'start': '172.9.9.2', 
'id': '', 'end': '172.9.9.254'}, 'name': '', 'host_routes': [], 
'dns_nameservers': [], 'gateway_ip': '172.9.9.1', 'ipv6_address_mode': None, 
'cidr': '172.9.9.0/24', 'ipv6_ra_mode': None}, 'ip_address': '172.9.9.9'}], 
'device_id': 'dhcp-12345678-1234--1234567890ab', 'network_id': 
'12345678-1234-5678-1234567890ab', 'device_owner': '', 'mac_address': 
'aa:bb:cc:dd:ee:ff', 'id': '12345678-1234--1234567890ab', 
'allocation_pools': {'start': '172.9.9.2', 'id': '', 'end': '172.9.9.254'}}),
   call.configure_ipv6_ra('qdhcp-12345678-1234-5678-1234567890ab', 
'default', 0),
   call.plug('12345678-1234-5678-1234567890ab', 
'12345678-1234--1234567890ab', 'tap12345678-12', 'aa:bb:cc:dd:ee:ff', 
mtu=None, namespace='qdhcp-12345678-1234-5678-1234567890ab'),
   call.init_l3('tap12345678-12', ['172.9.9.9/24', '169.254.169.254/16'], 
namespace='qdhcp-12345678-1234-5678-1234567890ab')]

  
  Actual:
  [call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 
'12345678-1234-5678-1234567890ab', 'enable_dhcp': True, 'tenant_id': 
'---', 'ip_version': 4, 'id': 
'---', 'allocation_pools': {'start': '172.9.9.2', 
'id': '', 'end': '172.9.9.254'}, 'name': '', 'host_routes': [], 
'dns_nameservers': [], 'gateway_ip': '172.9.9.1', 'ipv6_address_mode': None, 
'cidr': '172.9.9.0/24', 'ipv6_ra_mode': None}, 'ip_address': '172.9.9.9'}], 
'device_id': 'dhcp-12345678-1234--1234567890ab', 'network_id': 
'12345678-1234-5678-1234567890ab', 'device_owner': '', 'mac_address': 
'aa:bb:cc:dd:ee:ff', 'id': '12345678-1234--1234567890ab', 
'allocation_pools': {'start': '172.9.9.2', 'id': '', 'end': '172.9.9.254'}}),
   call.plug('12345678-1234-5678-1234567890ab', 
'12345678-1234--1234567890ab', 'tap12345678-12', 'aa:bb:cc:dd:ee:ff', 
mtu=None, namespace='qdhcp-12345678-1234-5678-1234567890ab'),
   call.init_l3('tap12345678-12', ['172.9.9.9/24', '169.254.169.254/16'], 
namespace='qdhcp-12345678-1234-5678-1234567890ab')]


  The problem occurs because
  neutron.common.ipv6_utils.is_enabled_and_bind_by_default() is not
  mocked.
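
  A minimal sketch of the fix (standard mock usage; _test_setup is a
  hypothetical helper standing in for the original test body): pin the
  IPv6 probe so the expected-calls check no longer depends on the host:

      import mock

      from neutron.tests import base

      class TestDeviceManager(base.BaseTestCase):
          @mock.patch('neutron.common.ipv6_utils'
                      '.is_enabled_and_bind_by_default',
                      return_value=True)
          def test_setup(self, mock_ipv6_enabled):
              # With the probe pinned to True, configure_ipv6_ra() is
              # always part of the expected call list.
              self._test_setup()  # hypothetical helper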

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747437/+subscriptions



[Yahoo-eng-team] [Bug 1744160] Re: Change in iso8601 0.1.12 date format breaks parsing with py35

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/541142
Committed: 
https://git.openstack.org/cgit/openstack/oslo.utils/commit/?id=010fe3b1023871740b57dbc450f80e6c0c0f6e43
Submitter: Zuul
Branch:master

commit 010fe3b1023871740b57dbc450f80e6c0c0f6e43
Author: John L. Villalovos 
Date:   Mon Feb 5 22:29:38 2018 -0800

Fix breaking unit tests due to iso8601 changes

The move from iso8601===0.1.11 to iso8601===0.1.12 broke unit
tests in oslo.utils.

iso8601 used to do:
from datetime import datetime

But now they call datetime.datetime():
import datetime
datetime.datetime()

Unfortunately the unit tests that mocked datetime.datetime() are now
mocking the one in iso8601. This causes a failure in the unit tests.

Fix this by using the 'wraps' argument to mock, so that the calls are
passed through to datetime.datetime. Also changed to using the
decorator style of mock.

In addition Python 3 unit tests were broken due to changing how the
UTC time zone is represented from 'UTC' to 'UTC+00:00'.

Closes-Bug: #1747575
Closes-Bug: #1744160
Change-Id: Ia80ffb5e571cc5366bef2bc1a32c457a3c16843d
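
A hedged illustration of the 'wraps' technique (context-manager style
rather than the decorator style the commit uses): the mock records
calls but forwards them to the real datetime.datetime, so iso8601's own
use keeps working.

    import datetime

    import mock

    real_dt = datetime.datetime
    with mock.patch('datetime.datetime', wraps=real_dt) as mock_dt:
        # Only utcnow() is forced; the constructor and the other
        # classmethods still pass through to the real class.
        mock_dt.utcnow.return_value = real_dt(2018, 2, 6)
        assert datetime.datetime.utcnow() == real_dt(2018, 2, 6)
        assert datetime.datetime(2012, 2, 14).year == 2012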


** Changed in: oslo.utils
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1744160

Title:
  Change in iso8601 0.1.12 date format breaks parsing with py35

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.utils:
  Fix Released
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  The new iso8601 package returns strings in the format:

  '2012-02-14T20:53:07UTC+00:00'

  instead of:

  
  '2012-02-14T20:53:07Z'

  
  This is resulting in date string comparison failures and 
timeutils.parse_isotime errors with:

  ValueError: Unable to parse date string '2014-08-08T00:00:00UTC+00:00'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1744160/+subscriptions



[Yahoo-eng-team] [Bug 1747332] Re: application credential cache is not invalidated

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540324
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c6cfaadf5f0846c551f1331619a2b1e0d6823622
Submitter: Zuul
Branch:master

commit c6cfaadf5f0846c551f1331619a2b1e0d6823622
Author: wangxiyuan 
Date:   Mon Feb 5 10:37:06 2018 +0800

Add cache invalidation when delete application credential

When deleting application credentials for a user/project, the
related cache information should be invalidated as well.

Closes-Bug: #1747332
Change-Id: I431bf1921a636cce00a807f9d639628da8664c24


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1747332

Title:
  application credential cache is not invalidated

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  application credential cache is not invalidated when the related user
  is disabled/deleted or the related assignment is removed.

  reproduce:
  1. create an application credential.
  2. get it for caching.
  3. delete the related user (or disable it, or remove an assigned role)
  4. get it again.

  Expected: step 4 returns 404.
  Actual: it still returns the deleted application credential.
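
  For reference, a hedged sketch of the invalidation pattern the fix
  applies (illustrative names in keystone's dogpile-style memoization,
  not the exact code):

      def delete_application_credential(self, application_credential_id):
          self.driver.delete_application_credential(
              application_credential_id)
          # Drop the cached copy so a later GET (step 4 above) cannot
          # return the deleted record.
          self.get_application_credential.invalidate(
              self, application_credential_id)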

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1747332/+subscriptions



[Yahoo-eng-team] [Bug 1291157] Re: idp deletion should trigger token revocation

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/531915
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=f463bdccf130ad5e6bd2adb5fba785455477de00
Submitter: Zuul
Branch:master

commit f463bdccf130ad5e6bd2adb5fba785455477de00
Author: Lance Bragstad 
Date:   Mon Jan 8 22:03:50 2018 +

Validate identity providers during token validation

Previously, it was possible to validate a federated keystone token
after the identity provider associated by that token was deleted,
which is a security concern.

This commit does two things. First it makes it so that the token
cache is invalidated when identity providers are deleted. Second,
it validates the identity provider in the token data and ensures it
actually exists in the system before considering the token valid.

Change-Id: I57491c5a7d657b25cc436452acd7fcc4cd285839
Closes-Bug: 1291157


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1291157

Title:
  idp deletion should trigger token revocation

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When a federation IdP is deleted, the tokens that were issued (and are
  still active) and associated with the IdP should be deleted, to prevent
  unwarranted access. The fix should delete any tokens associated with
  the IdP upon deletion (and possibly update, too).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291157/+subscriptions



[Yahoo-eng-team] [Bug 1735407] Re: [Nova] Evacuation doesn't respect anti-affinity rules

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/525242
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=edeeaf9102eccb78f1a2555c7e18c3d706f07639
Submitter: Zuul
Branch:master

commit edeeaf9102eccb78f1a2555c7e18c3d706f07639
Author: Balazs Gibizer 
Date:   Mon Dec 4 16:18:30 2017 +0100

Add late server group policy check to rebuild

The affinity and anti-affinity server group policy is enforced by the
scheduler, but two parallel scheduling runs can result in the policy
being violated. During instance boot a late policy check was performed
in the compute manager to prevent this. This check was missing in the
rebuild case, so two parallel evacuate commands could violate the
server group policy. This patch introduces the late policy check in
rebuild to prevent such a situation. When the violation is detected
during boot, a re-scheduling happens. However, the rebuild action has
no re-scheduling implementation, so in this case the rebuild will fail
and the evacuation needs to be retried by the user. Still, this is
better than allowing a parallel evacuation to break the server group
affinity policy.

To make the late policy check possible in the compute manager, the
rebuild_instance compute RPC call was extended with a request_spec
parameter.

Co-Authored-By: Richard Zsarnoczai 

Change-Id: I752617066bb2167b49239ab9d17b0c89754a3e12
Closes-Bug: #1735407
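
A condensed, hedged sketch of such a late anti-affinity check on the
compute host (names are illustrative; the real check lives in nova's
compute manager and raises a group-policy exception):

    def validate_anti_affinity(instance_uuid, group_members, host, host_of):
        # host_of(uuid) returns the host currently chosen for a member.
        others_here = [m for m in group_members
                       if m != instance_uuid and host_of(m) == host]
        if others_here:
            # During boot this triggers a reschedule; during rebuild
            # (evacuate) it fails the operation so the user can retry.
            raise RuntimeError('anti-affinity violated on %s' % host)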


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1735407

Title:
  [Nova] Evacuation doesn't respect anti-affinity rules

Status in Mirantis OpenStack:
  Won't Fix
Status in Mirantis OpenStack 9.x series:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  --- Environment ---
  MOS: 9.2
  Nova: 13.1.1-7~u14.04+mos20
  3 compute nodes

  --- Steps to reproduce ---
  1. Create a new server group:
  nova server-group-create anti anti-affinity

  2. Launch 2 VMs in this server group:
  nova boot --image TestVM --flavor m1.tiny --nic 
net-id=889e4e01-9b38-4007-829d-b69d53269874 --hint 
group=def58398-4a00-4066-a2aa-13f1b6e7e327 vm-1
  nova boot --image TestVM --flavor m1.tiny --nic 
net-id=889e4e01-9b38-4007-829d-b69d53269874 --hint 
group=def58398-4a00-4066-a2aa-13f1b6e7e327 vm-2

  3. Stop nova-compute on the nodes where these 2 VMs are running:
  nova show vm-1 | grep "hypervisor"
  OS-EXT-SRV-ATTR:hypervisor_hostname  | node-12.domain.tld
  nova show vm-2 | grep "hypervisor"
  OS-EXT-SRV-ATTR:hypervisor_hostname  | node-13.domain.tld
  [root@node-12 ~]$ service nova-compute stop
  nova-compute stop/waiting
  [root@node-13 ~]$ service nova-compute stop
  nova-compute stop/waiting

  4. Evacuate both VMs almost at once:
  nova evacuate vm-1
  nova evacuate vm-2

  5. Check where these 2 VMs are running:
  nova show vm-1 | grep "hypervisor"
  nova show vm-2 | grep "hypervisor"

  --- Actual behavior ---
  Both VMs have been evacuated on the same node:
  [root@node-11 ~]$ virsh list
   IdName   State
  
   2 instance-0001  running
   3 instance-0002  running

  --- Expected behavior ---
  According to the anti-affinity rule, only 1 VM is evacuated.
  Another one failed to evacuate with the appropriate message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1735407/+subscriptions



[Yahoo-eng-team] [Bug 1747562] Re: CPU topologies in nova - wrong link for "Manage Flavors"

2018-02-06 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747562

Title:
  CPU topologies in nova - wrong link for "Manage Flavors"

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way:

  The last sentence here where it links to "Manage Flavors" is the wrong
  link:

  https://docs.openstack.org/nova/latest/admin/cpu-topologies.html
  #customizing-instance-numa-placement-policies

  It goes here https://docs.openstack.org/nova/latest/admin/flavors.html
  which doesn't talk about NUMA extra specs. It should be pointing at
  the "NUMA topology" section of the flavor extra specs page:

  
https://docs.openstack.org/nova/latest/user/flavors.html?highlight=numa%20topology
  #extra-specs

  
  ---
  Release: 17.0.0.0b4.dev154 on 2018-02-05 22:44
  SHA: 3ed7da92bbf7926f8511b6f4efff66c8cc00294c
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/cpu-topologies.rst
  URL: https://docs.openstack.org/nova/latest/admin/cpu-topologies.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747562/+subscriptions



[Yahoo-eng-team] [Bug 1746609] Re: test_boot_server_from_encrypted_volume_luks cannot detach an encrypted StorPool-backed volume

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/539739
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cafe3d066ef7021c18961d4b239a10f61db23f2d
Submitter: Zuul
Branch:master

commit cafe3d066ef7021c18961d4b239a10f61db23f2d
Author: Matt Riedemann 
Date:   Wed Jan 31 19:06:46 2018 -0500

libvirt: fix native luks encryption failure to find volume_id

Not all volume types put a 'volume_id' entry in the
connection_info['data'] dict. This change uses a new
utility method to look up the volume_id in the connection_info
data dict and if not found there, uses the 'serial' value
from the connection_info, which we know at least gets set
when the DriverVolumeBlockDevice code attaches the volume.

This also has to update pre_live_migration since the connection_info
dict doesn't have a 'volume_id' key in it. It's unclear what
this code was expecting, or if it ever really worked, but since
an attached volume represented by a BlockDeviceMapping here has
a volume_id attribute, we can just use that. As that code path
was never tested, this updates related unit tests and refactors
the tests to actually use the type of DriverVolumeBlockDevice
objects that the ComputeManager would be sending down to the
driver pre_live_migration method. The hard-to-read squashed
dicts in the tests are also re-formatted so a human can actually
read them.

Change-Id: Ie02d298cd92d5b5ebcbbcd2b0e8be01f197bfafb
Closes-Bug: #1746609
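
A hedged paraphrase of the lookup described above (the helper name is
illustrative):

    def resolve_volume_id(connection_info):
        # Prefer the explicit volume_id some backends put in the data
        # dict; otherwise fall back to the 'serial' value set by the
        # DriverVolumeBlockDevice code when the volume was attached.
        volume_id = connection_info.get('data', {}).get('volume_id')
        return volume_id or connection_info.get('serial')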


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746609

Title:
  test_boot_server_from_encrypted_volume_luks cannot detach an encrypted
  StorPool-backed volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Hi,

  First of all, thanks a lot for working on Nova!

  The StorPool third-party Cinder CI has been failing on every test run
  today with the same problem: the
  test_boot_server_from_encrypted_volume_luks Tempest test fails when
  trying to detach a volume with an exception in the nova-compute
  service log: "Failed to detach volume 645fd643-89fc-4b3d-
  9ea5-59c764fc39a2 from /dev/vdb: AttributeError: 'NoneType' object has
  no attribute 'format_dom'"

  An example stack trace may be seen at:
  - nova-compute log: 
http://logs.ci-openstack.storpool.com/18/539318/1/check/dsvm-tempest-storpool/c3daf58/logs/screen-n-cpu.txt.gz#_Jan_31_18_07_27_971552
  - console log (with the list of tests run): 
http://logs.ci-openstack.storpool.com/18/539318/1/check/dsvm-tempest-storpool/c3daf58/console.html

  Actually, start from http://logs.ci-openstack.storpool.com/ - any of
  the recent five or six failures can be traced back to this problem.

  Of course, it is completely possible that the (recently merged)
  StorPool Nova volume attachment driver or the (also recently merged)
  StorPool os-brick connector is at fault; if there are any
  configuration fields or method parameters that we should be
  preserving, passing through, or handling in some other way, please let
  us know and we will modify our drivers.  Also, our CI system is
  available for testing any suggested patches or workarounds.

  Thanks in advance for looking at this, and thanks for your work on
  Nova and OpenStack in general!

  Best regards,
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746609/+subscriptions



[Yahoo-eng-team] [Bug 1701614] Re: Booting from an encrypted volume

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540506
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=efb966ad649621b0dc051223c16a79fc0783f33c
Submitter: Zuul
Branch:master

commit efb966ad649621b0dc051223c16a79fc0783f33c
Author: Bruce Benjamin 
Date:   Fri Feb 2 15:11:58 2018 -0500

docs: Add booting from an encrypted volume

Now that the instructions for booting from a volume
have been migrated to nova, the instructions for
booting from an encrypted volume can be added as
well.

This commit adds instructions for how to import an
image into an encrypted volume.

Closes-Bug: 1701614

Change-Id: Ida4cf70a7e53fd37ceeadb5629e3221072219689


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701614

Title:
  Booting from an encrypted volume

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-manuals:
  Won't Fix

Bug description:
  The user guide currently has a section describing how to import
  a bootable image into a volume, but it doesn't include instructions
  on how to import an image into an encrypted volume and boot from it.

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [x] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.0 on 2017-06-30 05:27
  SHA: a1f1748478125ccd68d90a98ccc06c7ec359d3a0
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/source/cli-nova-launch-instance-from-volume.rst
  URL: 
https://docs.openstack.org/user-guide/cli-nova-launch-instance-from-volume.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701614/+subscriptions



[Yahoo-eng-team] [Bug 1582725] Re: cinder_policy.json action does not match the Cinder policy.json file

2018-02-06 Thread Seyeong Kim
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Description changed:

+ [Impact]
+ cinder policies are not in horizon's policy.json, so the "consistency
+ groups" tab, though unset, is enabled by default.
+ 
+ [Test Case]
+ 1. deploy a simple OpenStack deployment via juju
+ 2. horizon -> volume -> check if there is a consistency groups tab
+ 
+ [Regression]
+ After this patch, horizon needs to be restarted, so it is briefly down.
+ The patch mostly changes a config file (plus a little source code), so
+ its effect on behaviour is limited.
+ 
+ [Other]
+ 
+ related commit
+ 
+ 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f
+ 
+ [Original Description]
+ 
  The horizon/openstack_dashboard/conf/cinder_policy.json actions do not
  match the policy actions used by the Cinder component.
  Cinder uses "volume_extension:volume_actions:upload_public",
  while Horizon's policy.json and code use "volume:upload_to_image".

  This is the only mismatch of policy actions between the two components.
  It also means that a user of Cinder and Horizon cannot update the
  Cinder policy.json, copy it directly to Horizon, and have the button
  function according to the Cinder policy.json rules.

  This can be missed, as the Cinder policy.json file and the Horizon
  file are updated independently.

  I think that the action the Horizon code is using should match the
  component that it is supporting.

** Tags added: sts

** Tags added: sts-sru-needed

** Patch added: "lp1582725_xenial.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1582725/+attachment/5050163/+files/lp1582725_xenial.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582725

Title:
  cinder_policy.json action does not match the Cinder policy.json file

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New

Bug description:
  [Impact]
  cinder policies are not in horizon's policy.json, so the "consistency
  groups" tab, though unset, is enabled by default.

  Affects Xenial and UCA Mitaka.

  [Test Case]
  1. deploy a simple OpenStack deployment via juju
  2. horizon -> volume -> check if there is a consistency groups tab

  [Regression]
  After this patch, horizon needs to be restarted, so it is briefly down.
  The patch mostly changes a config file (plus a little source code), so
  its effect on behaviour is limited.

  [Other]

  related commit

  
https://git.openstack.org/cgit/openstack/horizon/commit/?id=388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f

  [Original Description]

  The horizon/openstack_dashboard/conf/cinder_policy.json actions do not
  match the policy actions used by the Cinder component.
  Cinder uses "volume_extension:volume_actions:upload_public",
  while Horizon's policy.json and code use "volume:upload_to_image".

  This is the only mismatch of policy actions between the two components.
  It also means that a user of Cinder and Horizon cannot update the
  Cinder policy.json, copy it directly to Horizon, and have the button
  function according to the Cinder policy.json rules.

  This can be missed, as the Cinder policy.json file and the Horizon
  file are updated independently.

  I think that the action the Horizon code is using should match the
  component that it is supporting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1582725/+subscriptions



[Yahoo-eng-team] [Bug 1747781] [NEW] Cannot remove inherited role from user

2018-02-06 Thread Amelia Cordwell
Public bug reported:

If a user has an inherited role set on a project (e.g. inherits
'project_admin' from project_a to the child project project_b), horizon
shows this as only a normal role on project_a.

If the user attempts to remove this role, the following errors are
displayed:
Error: Failed to modify 2 project members, update project groups and
update project quotas.
Error: Unable to modify project "demo".

Additionally, a normal version of that role cannot be granted to the
user through horizon while they have the inherited role set.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1747781

Title:
  Cannot remove inherited role from user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If a user has an inherited role set on a project (e.g. inherits
  'project_admin' from project_a to the child project project_b), horizon
  shows this as only a normal role on project_a.

  If the user attempts to remove this role, the following errors are
  displayed:
  Error: Failed to modify 2 project members, update project groups and
  update project quotas.
  Error: Unable to modify project "demo".

  Additionally, a normal version of that role cannot be granted to the
  user through horizon while they have the inherited role set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1747781/+subscriptions



[Yahoo-eng-team] [Bug 1746558] Re: Make service all-cells min version helper use scatter-gather

2018-02-06 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/pike
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746558

Title:
  Make service all-cells min version helper use scatter-gather

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  Currently the "get_minimum_version_all_cells" function in service runs
  sequentially and this affects the performance in case of large
  deployments (running a lot of cells) :
  
https://github.com/openstack/nova/blob/stable/pike/nova/objects/service.py#L440

  So it would be nice to use the scatter_gather_all_cells function to do
  this operation in parallel.

  Also apart from the performance scaling point of view, in case
  connection to a particular cell fails, it would be nice to have
  sentinels returned which is done by the scatter_gather_all_cells. This
  helps when a cell is down.
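
  A hedged sketch of the suggested change (assuming the current
  scatter_gather_all_cells signature in nova.context):

      from nova import context as nova_context
      from nova import objects

      def get_minimum_version_all_cells(ctxt, binary):
          results = nova_context.scatter_gather_all_cells(
              ctxt, objects.Service.get_minimum_version, binary)
          # results maps cell uuid -> version, or a sentinel when a
          # cell timed out or raised, so one down cell cannot block.
          versions = [v for v in results.values() if isinstance(v, int)]
          return min(versions) if versions else None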

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746558/+subscriptions



[Yahoo-eng-team] [Bug 1747747] [NEW] Making Payload consistent for all the operations of an object

2018-02-06 Thread Manjeet Singh Bhatia
Public bug reported:

While implementing the L3 flavor driver for OpenDaylight, I observed
that the payload sent to the callbacks in a driver is not consistent:
in some cases it is an oslo versioned object and in others a dictionary.
Because of this, drivers always have to add tweaks and hacks in driver
and test code. It would make more sense to pass one type of payload for
all the operations of an object, which would help achieve some
stability and simplicity.
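
Until the payloads are unified, every driver ends up writing a shim
like the following (a hedged sketch; it assumes the objects expose
to_dict(), as NeutronObject-based OVOs do):

    def as_dict(payload):
        # Accept either an oslo versioned object or a plain dict and
        # normalize to a dict for the rest of the driver code.
        if isinstance(payload, dict):
            return payload
        return payload.to_dict()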

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747747

Title:
  Making Payload consistent for all the operations of an object

Status in neutron:
  New

Bug description:
  While implementing the L3 flavor driver for OpenDaylight, I observed
  that the payload sent to the callbacks in a driver is not consistent:
  in some cases it is an oslo versioned object and in others a
  dictionary. Because of this, drivers always have to add tweaks and
  hacks in driver and test code. It would make more sense to pass one
  type of payload for all the operations of an object, which would help
  achieve some stability and simplicity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747747/+subscriptions



[Yahoo-eng-team] [Bug 1733368] Re: extra_dhcp_opt extension not properly documented in api-ref

2018-02-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/539160
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=615e88c328016c38f77540f7cf392a1d27a7b663
Submitter: Zuul
Branch:master

commit 615e88c328016c38f77540f7cf392a1d27a7b663
Author: Michal Kelner Mishali 
Date:   Tue Jan 30 11:14:25 2018 +0200

Document extra_dhcp_opt extension in api-ref

Adding a subsection for the extra_dhcp_opt extension
to ports documentation.

Closes-Bug: #1733368

Change-Id: I5416ff3690c2e0263dede56f2cd4779720433400
Signed-off-by: Michal Kelner Mishali 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733368

Title:
  extra_dhcp_opt extension not properly documented in api-ref

Status in neutron:
  Fix Released

Bug description:
  The extra_dhcp_opt is not properly doc'd in our api-ref. While the
  ports api-ref does doc the extra_dhcp_opts request/response param, the
  extra_dhcp_opt extension needs to be described in a subnsection atop
  to the ports api-ref like others do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733368/+subscriptions



[Yahoo-eng-team] [Bug 1747727] [NEW] Unit test test_download_service_unavailable fails behind proxy

2018-02-06 Thread Paul Bourke
Public bug reported:

A patch was submitted some time back to allow some tests to run behind an
HTTP proxy [0]. It's unclear to me why '0.0.0.0:1' was used rather than
something like '127.0.0.1', which will commonly be in an environment's
$no_proxy variable, whereas the former will not.

Unless there's some special reason this value was used, I propose
switching instances of 0.0.0.0:1 to 127.0.0.1.

[0] https://review.openstack.org/#/c/316965/
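
A quick, hedged demonstration of the difference (Python 3 stdlib; on
some platforms proxy_bypass consults system settings instead of the
environment):

    import os
    from urllib.request import proxy_bypass

    os.environ['no_proxy'] = '127.0.0.1,localhost'
    print(proxy_bypass('127.0.0.1'))  # truthy: request goes direct
    print(proxy_bypass('0.0.0.0'))    # falsy unless explicitly listed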

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747727

Title:
  Unit test test_download_service_unavailable fails behind proxy

Status in Glance:
  New

Bug description:
  A patch was submitted some time back to allow some tests to run behind
  an HTTP proxy [0]. It's unclear to me why '0.0.0.0:1' was used rather
  than something like '127.0.0.1', which will commonly be in an
  environment's $no_proxy variable, whereas the former will not.

  Unless there's some special reason this value was used, I propose
  switching instances of 0.0.0.0:1 to 127.0.0.1.

  [0] https://review.openstack.org/#/c/316965/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747727/+subscriptions



[Yahoo-eng-team] [Bug 1747709] [NEW] neutron-tempest-ovsfw fails 100% times

2018-02-06 Thread Slawek Kaplonski
Public bug reported:

2 or 3 tests are failing in neutron-tempest-ovsfw job. Example of failed
job: http://logs.openstack.org/54/537654/4/check/neutron-tempest-
ovsfw/5c90b2b/logs/testr_results.html.gz

>From ovs L2 agent logs it looks that there is quite many issues related
to ovsfw: http://logs.openstack.org/54/537654/4/check/neutron-tempest-
ovsfw/5c90b2b/logs/screen-q-agt.txt.gz?level=ERROR

Errors are like:

Feb 05 21:23:55.740789 ubuntu-xenial-inap-mtl01-0002377325 
neutron-openvswitch-agent[20990]: ERROR 
neutron.agent.linux.openvswitch_firewall.firewall [None 
req-bdd502f0-2b26-4566-b0e5-ee77d4d939ae None None] Initializing unfiltered 
port 7f46ff5c-5c9f-462f-8348-ef8741a9194d that does not exist in ovsdb: Port 
7f46ff5c-5c9f-462f-8348-ef8741a9194d is not managed by this agent..: 
OVSFWPortNotFound: Port 7f46ff5c-5c9f-462f-8348-ef8741a9194d is not managed by 
this agent.
Feb 05 21:24:09.731637 ubuntu-xenial-inap-mtl01-0002377325 
neutron-openvswitch-agent[20990]: ERROR 
neutron.agent.linux.openvswitch_firewall.firewall [None 
req-bdd502f0-2b26-4566-b0e5-ee77d4d939ae None None] Initializing unfiltered 
port b191da42-6eca-4ba3-a12b-244b07e6fe45 that does not exist in ovsdb: Port 
b191da42-6eca-4ba3-a12b-244b07e6fe45 is not managed by this agent..: 
OVSFWPortNotFound: Port b191da42-6eca-4ba3-a12b-244b07e6fe45 is not managed by 
this agent.

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: ovs-fw

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747709

Title:
  neutron-tempest-ovsfw fails 100% times

Status in neutron:
  New

Bug description:
  2 or 3 tests are failing in neutron-tempest-ovsfw job. Example of
  failed job: http://logs.openstack.org/54/537654/4/check/neutron-
  tempest-ovsfw/5c90b2b/logs/testr_results.html.gz

  From the OVS L2 agent logs it looks like there are quite a few issues
  related to ovsfw: http://logs.openstack.org/54/537654/4/check/neutron-
  tempest-ovsfw/5c90b2b/logs/screen-q-agt.txt.gz?level=ERROR

  Errors are like:

  Feb 05 21:23:55.740789 ubuntu-xenial-inap-mtl01-0002377325 
neutron-openvswitch-agent[20990]: ERROR 
neutron.agent.linux.openvswitch_firewall.firewall [None 
req-bdd502f0-2b26-4566-b0e5-ee77d4d939ae None None] Initializing unfiltered 
port 7f46ff5c-5c9f-462f-8348-ef8741a9194d that does not exist in ovsdb: Port 
7f46ff5c-5c9f-462f-8348-ef8741a9194d is not managed by this agent..: 
OVSFWPortNotFound: Port 7f46ff5c-5c9f-462f-8348-ef8741a9194d is not managed by 
this agent.
  Feb 05 21:24:09.731637 ubuntu-xenial-inap-mtl01-0002377325 
neutron-openvswitch-agent[20990]: ERROR 
neutron.agent.linux.openvswitch_firewall.firewall [None 
req-bdd502f0-2b26-4566-b0e5-ee77d4d939ae None None] Initializing unfiltered 
port b191da42-6eca-4ba3-a12b-244b07e6fe45 that does not exist in ovsdb: Port 
b191da42-6eca-4ba3-a12b-244b07e6fe45 is not managed by this agent..: 
OVSFWPortNotFound: Port b191da42-6eca-4ba3-a12b-244b07e6fe45 is not managed by 
this agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747705] [NEW] "ssh_pwauth" always true on CloudStack datasource with password

2018-02-06 Thread shota.a
Public bug reported:

ssh_pwauth is forcibly set to true when
- using the CloudStack datasource
- using a VM template that supports the password reset feature.

When cloud-init obtains the password from the virtual router, it sets
ssh_pwauth to true even if the original ssh_pwauth value is no/unchanged.
I read the code and found this behavior in
https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceCloudStack.py#L148

I'd like to use the password only for the virtual console and forbid SSH
password authentication.
The easiest solution is to remove the code at
https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceCloudStack.py#L150
but I'm not sure about the side effects; a sketch of the idea follows.
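
A minimal standalone sketch of the proposed change ('set_password' stands in
for the password fetched from the virtual router; this is my illustration,
not cloud-init code):

    # Config the datasource would emit with the forced override removed,
    # so the user's own cloud-config ssh_pwauth value wins.
    set_password = 'from-virtual-router'  # placeholder
    cfg = {
        'password': set_password,
        'chpasswd': {'expire': False},
        # 'ssh_pwauth': True,  # removed: let user cloud-config decide
    }
    print(cfg)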

What do you think about this?
The version of cloud-init is 17.1-46-g7acc9e68-0ubuntu1~16.04.1, on Ubuntu
16.04.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1747705

Title:
  "ssh_pwauth" always true on CloudStack datasource with password

Status in cloud-init:
  New

Bug description:
  ssh_pwauth is forcibly set to true when
  - using the CloudStack datasource
  - using a VM template that supports the password reset feature.

  When cloud-init obtains the password from the virtual router, it sets
  ssh_pwauth to true even if the original ssh_pwauth value is no/unchanged.
  I read the code and found this behavior in
  https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceCloudStack.py#L148

  I'd like to use the password only for the virtual console and forbid SSH
  password authentication.
  The easiest solution is to remove the code at
  https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceCloudStack.py#L150
  but I'm not sure about the side effects.

  What do you think about this?
  The version of cloud-init is 17.1-46-g7acc9e68-0ubuntu1~16.04.1, on Ubuntu
  16.04.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1747705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747693] [NEW] boot from volume using source type blank/image/snapshot still does legacy style attach

2018-02-06 Thread Matt Riedemann
Public bug reported:

As part of this blueprint in queens:

https://specs.openstack.org/openstack/nova-specs/specs/queens/approved
/cinder-new-attach-apis.html

We now want to create new volume attachments using the cinder 3.44 API.

However, when booting from volume using a source_type of
blank/image/snapshot, where nova-compute creates the volume and then
attaches it, it still goes down the legacy attach flow because a volume
attachment record is never created and stored on the
BlockDeviceMapping.attachment_id field.

There are even TODOs in the code for where this needs to be fixed:

https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L687

https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L712

https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L736
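
For reference, a rough sketch of the new-style flow these TODOs point at (my
assumption about the shape of the eventual fix, using nova's cinder API
wrapper): create a Cinder 3.44 attachment for the volume nova-compute just
created and record it on the BDM so later operations take the new attach path.

    from nova.volume import cinder

    volume_api = cinder.API()

    def attach_new_style(context, volume_id, instance, bdm):
        # Create the attachment record (requires Cinder API >= 3.44) ...
        attachment = volume_api.attachment_create(
            context, volume_id, instance.uuid)
        # ... and persist its id so this BDM no longer looks legacy.
        bdm.attachment_id = attachment['id']
        bdm.save()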

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747693

Title:
  boot from volume using source type blank/image/snapshot still does
  legacy style attach

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  As part of this blueprint in queens:

  https://specs.openstack.org/openstack/nova-specs/specs/queens/approved
  /cinder-new-attach-apis.html

  We now want to create new volume attachments using the cinder 3.44
  API.

  However, when booting from volume using a source_type of
  blank/image/snapshot, where nova-compute creates the volume and then
  attaches it, it still goes down the legacy attach flow because a
  volume attachment record is never created and stored on the
  BlockDeviceMapping.attachment_id field.

  There are even TODOs in the code for where this needs to be fixed:

  
https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L687

  
https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L712

  
https://github.com/openstack/nova/blob/2c1874a0ecdd1b5ce7670cdfc42396e90e3a55aa/nova/virt/block_device.py#L736

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747694] [NEW] Trust documentation lists support for paging

2018-02-06 Thread Lance Bragstad
Public bug reported:

The API reference for listing trusts declares support for ``page`` and
``per_page`` query parameters [0], but those don't seem to be supported:

http://paste.openstack.org/show/663481/

We should either update the documentation or include support for paging
trusts so that the document is accurate. Since keystone doesn't really
support paging elsewhere, I think it would be acceptable to remove the
``page`` and ``per_page`` query parameters from the trust API reference.

[0] https://developer.openstack.org/api-ref/identity/v3-ext/index.html
#os-trust-api
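
A quick standalone way to observe the mismatch (a hypothetical script using
python-requests; the endpoint and token are placeholders): if the parameters
were honoured, the second call would return at most two trusts.

    import requests

    headers = {"X-Auth-Token": "ADMIN_TOKEN"}  # placeholder token
    base = "http://keystone.example.com:5000/v3/OS-TRUST/trusts"

    full = requests.get(base, headers=headers).json()
    paged = requests.get(base, headers=headers,
                         params={"page": 1, "per_page": 2}).json()

    # Compare the sizes; per the paste above, paging has no effect.
    print(len(full.get("trusts", [])), len(paged.get("trusts", [])))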

** Affects: keystone
 Importance: Medium
 Status: Confirmed


** Tags: documentation low-hanging-fruit office-hours

** Tags added: documentation office-hours

** Changed in: keystone
Milestone: None => queens-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1747694

Title:
  Trust documentation lists support for paging

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  The API reference for listing trusts declares support for ``page`` and
  ``per_page`` query parameters [0], but those don't seem to be
  supported:

  http://paste.openstack.org/show/663481/

  We should either update the documentation or include support for
  paging trusts so that the document is accurate. Since keystone doesn't
  really support paging elsewhere, I think it would be acceptable to
  remove the ``page`` and ``per_page`` query parameters from the trust
  API reference.

  [0] https://developer.openstack.org/api-ref/identity/v3-ext/index.html
  #os-trust-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1747694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741319] Re: arm64: Migration pre-check error: CPU doesn't have compatibility.

2018-02-06 Thread Ryan Beisner
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741319

Title:
  arm64: Migration pre-check error: CPU doesn't have compatibility.

Status in OpenStack nova-compute charm:
  Incomplete
Status in OpenStack Compute (nova):
  New

Bug description:
  Pike/openstack-base running on identical servers (HiSilicon D05):

  ubuntu@ike-hisi-maas:~$ openstack server migrate --live strong-emu dannf
  Migration pre-check error: CPU doesn't have compatibility.

  XML error: Missing CPU model name

  Refer to http://libvirt.org/html/libvirt-libvirt-
  host.html#virCPUCompareResult (HTTP 400) (Request-ID: req-
  c5ec9320-d111-40b7-af0e-d8414df3925c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1741319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529534] Re: Use new log style where Logger.exception() is used by passing an exception object as the first argument.

2018-02-06 Thread Zhao Chao
I think we can set this to 'Invalid', as troveclient does not use
oslo_log.

** Changed in: python-troveclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1529534

Title:
  Use new log style where Logger.exception() is used by passing an
  exception object as the first argument.

Status in Cinder:
  Invalid
Status in Magnum:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed
Status in Glance Client:
  New
Status in python-neutronclient:
  Confirmed
Status in python-troveclient:
  Invalid
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  Use new log style where Logger.exception() is used by passing an
  exception object as the first argument [1].

  [1] http://docs.openstack.org/developer/oslo.log/usage.html#no-more-
  implicit-conversion-to-unicode-str
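
  As a minimal standalone illustration of the style in question (plain
  stdlib logging, not code from any of the affected projects):

      import logging

      logging.basicConfig()
      LOG = logging.getLogger(__name__)

      try:
          int("not-a-number")  # stand-in for any failing call
      except ValueError as exc:
          # New style: pass the exception object itself to
          # Logger.exception() rather than pre-formatting it into the
          # message with str()/unicode().
          LOG.exception(exc)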

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1529534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747665] [NEW] Unknown column 'cn.mapped' in 'field list' after upgrade from Ocata to Pike

2018-02-06 Thread jiri
Public bug reported:

Hello, I upgraded from Ocata to Pike and now I see the following in the scheduler:
I ran nova-manage db sync, api_db sync and the online data migrations.
Am I missing something?

2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
[req-6065c009-2056-498e-9b71-5a27daca7092 - - - - -] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1054, u"Unknown column 'cn.mapped' in 
'field list'") [SQL: u'SELECT cn.created_at, cn.updated_at, cn.deleted_at, 
cn.deleted, cn.id, cn.service_id, cn.host, cn.uuid, cn.vcpus, cn.memory_mb, 
cn.local_gb, cn.vcpus_used, cn.memory_mb_used, cn.local_gb_used, 
cn.hypervisor_type, cn.hypervisor_version, cn.hypervisor_hostname, 
cn.free_ram_mb, cn.free_disk_gb, cn.current_workload, cn.running_vms, 
cn.cpu_info, cn.disk_available_least, cn.host_ip, cn.supported_instances, 
cn.metrics, cn.pci_stats, cn.extra_resources, cn.stats, cn.numa_topology, 
cn.ram_allocation_ratio, cn.cpu_allocation_ratio, cn.disk_allocation_ratio, 
cn.mapped \nFROM compute_nodes AS cn \nWHERE cn.deleted = %(deleted_1)s ORDER 
BY cn.id ASC'] [parameters: {u'deleted_1': 0}]: InternalError: (1054, u"Unknown 
column 'cn.mapped' in 'field list'")
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in 
_execute_context
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters context)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in 
do_execute
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 167, in execute
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 323, in _query
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 836, in query
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1020, in 
_read_query_result
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1303, in read
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 982, in 
_read_packet
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 394, in 
check_error
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
_check_mysql_exception(errinfo)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/err.py", line 115, in 
_check_mysql_exception
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters raise 
InternalError(errno, errorvalue)
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters 
InternalError: (1054, u"Unknown column 'cn.mapped' in 'field list'")
2018-02-06 12:39:51.654 32708 ERROR oslo_db.sqlalchemy.exc_filters
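
The 'mapped' column on compute_nodes appears to be part of the Pike schema, so
one hypothetical first check (placeholder credentials; pymysql, as already in
use per the traceback) is whether the cell database the scheduler reads from
actually received the migration:

    import pymysql

    # Placeholder connection details: point this at the cell DB from nova.conf.
    conn = pymysql.connect(host="localhost", user="nova",
                           password="secret", db="nova")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT COLUMN_NAME FROM information_schema.COLUMNS "
            "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s "
            "AND COLUMN_NAME = %s",
            ("nova", "compute_nodes", "mapped"))
        # No row back means the schema never reached the Pike level here.
        print("mapped column present:", bool(cur.fetchone()))
    conn.close()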

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747665

Title:
  Unknown column 'cn.mapped' in 'field list' after upgrade from Ocata to
  Pike

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello, I upgraded from Ocata to Pike and now I see the following in the scheduler:
  I ran nova-manage db sync, api_db sync and the online data migrations.
  Am I missing something?

  2018-02-06 12:39:51.654 32708 

[Yahoo-eng-team] [Bug 1747654] [NEW] [RFE] VPNaaS: enable sha384/sha512 auth algorithms for *Swan drivers

2018-02-06 Thread Hunt Xu
Public bug reported:

When adding sha384 and sha512 auth algorithms for vendor drivers (bug
#1638152), the commit message said "Openswan, Strongswan, Libreswan and
Cisco CSR driver doesn't support" sha384 and sha512 as auth algorithms.
However, after some research, all the *Swan drivers do support these two
algorithms, so it would be better to enable sha384/sha512 with the *Swan
drivers as a security improvement.

- For StrongSwan, wiki pages back in Mid 2014: [1][2].
- For LibreSwan, wiki page back in May 2016: [3].
- For OpenSwan, it is not well documented. However, the code last changed in 
Jan 2014 shows its awareness of these two algorithms: [4]

[1]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv1CipherSuites/16#Integrity-Algorithms
[2]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites/35#Integrity-Algorithms
[3]. 
https://libreswan.org/wiki/index.php?title=FAQ=20707#Which_ciphers_.2F_algorithms_does_libreswan_support.3F
[4]. 
https://github.com/xelerance/Openswan/blob/master/lib/libopenswan/alg_info.c

** Affects: neutron
 Importance: Undecided
 Assignee: Hunt Xu (huntxu)
 Status: New


** Tags: vpnaas

** Description changed:

  When adding sha384 and sha512 auth algorithms for vendor drivers(bug
  #1638152), the commit message said "Openswan, Strongswan, Libreswan and
  Cisco CSR driver doesn't support" sha384 and sha512 as auth algorithms.
  However, after some research, all the *Swan drivers do support these two
  algorithms. So it is better to enable sha384/sha512 with *Swan drivers
  for security improvements.
  
- For StrongSwan, wiki pages back in Mid 2014: [1][2].
- For LibreSwan, wiki page back in May 2016: [3].
- For OpenSwan, it is not well documented. However, the code last changed in 
Jan 2014 shows its awareness of these two algorithms: [4]
+ - For StrongSwan, wiki pages back in Mid 2014: [1][2].
+ - For LibreSwan, wiki page back in May 2016: [3].
+ - For OpenSwan, it is not well documented. However, the code last changed in 
Jan 2014 shows its awareness of these two algorithms: [4]
  
  [1]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv1CipherSuites/16#Integrity-Algorithms
  [2]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites/35#Integrity-Algorithms
  [3]. 
https://libreswan.org/wiki/index.php?title=FAQ=20707#Which_ciphers_.2F_algorithms_does_libreswan_support.3F
  [4]. 
https://github.com/xelerance/Openswan/blob/master/lib/libopenswan/alg_info.c

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747654

Title:
  [RFE] VPNaaS: enable sha384/sha512 auth algorithms for  *Swan drivers

Status in neutron:
  New

Bug description:
  When adding sha384 and sha512 auth algorithms for vendor drivers (bug
  #1638152), the commit message said "Openswan, Strongswan, Libreswan
  and Cisco CSR driver doesn't support" sha384 and sha512 as auth
  algorithms. However, after some research, all the *Swan drivers do
  support these two algorithms, so it would be better to enable
  sha384/sha512 with the *Swan drivers as a security improvement.

  - For StrongSwan, wiki pages back in Mid 2014: [1][2].
  - For LibreSwan, wiki page back in May 2016: [3].
  - For OpenSwan, it is not well documented. However, the code last changed in 
Jan 2014 shows its awareness of these two algorithms: [4]

  [1]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv1CipherSuites/16#Integrity-Algorithms
  [2]. 
https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites/35#Integrity-Algorithms
  [3]. 
https://libreswan.org/wiki/index.php?title=FAQ=20707#Which_ciphers_.2F_algorithms_does_libreswan_support.3F
  [4]. 
https://github.com/xelerance/Openswan/blob/master/lib/libopenswan/alg_info.c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2018-02-06 Thread Erno Kuvaja
** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  New
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1] https://docs.python.org/3/library/logging.html#logging.warning
  [2] https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
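
  The change itself is mechanical; a standalone stdlib example of the two
  spellings:

      import logging

      logging.basicConfig()
      LOG = logging.getLogger(__name__)

      LOG.warn("deprecated alias")       # still works, but deprecated
      LOG.warning("preferred spelling")  # identical output, not deprecated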

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747650] [NEW] Make bdms querying in multiple cells use scatter-gather

2018-02-06 Thread Surya Seetharaman
Public bug reported:

Currently the "_get_instance_bdms_in_multiple_cells" function in
extended_volumes runs sequentially, which hurts performance in large
deployments (running a lot of cells):
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extended_volumes.py#L50

So it would be nice to use the scatter_gather_cells function to do this
operation in parallel, as sketched below.

Apart from performance scaling, if the connection to a particular cell
fails, it would be nice to have sentinels returned, which
scatter_gather_cells does. This helps when a cell is down.
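
A rough sketch of the suggested change (names taken from nova.context and
nova.objects as I understand them; the timeout value is an assumption):

    from nova import context as nova_context
    from nova import objects

    def bdms_in_cells(ctxt, cell_mappings, instance_uuids):
        # Query every cell in parallel; 60s is a placeholder timeout.
        results = nova_context.scatter_gather_cells(
            ctxt, cell_mappings, 60,
            objects.BlockDeviceMappingList.bdms_by_instance_uuid,
            instance_uuids)
        # Each value is either a BDM list or a sentinel marking a cell
        # that raised or did not respond, which callers can skip.
        return results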

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: cells

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747650

Title:
  Make bdms querying in multiple cells use scatter-gather

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently the "_get_instance_bdms_in_multiple_cells" function in
  extended_volumes runs sequentially, which hurts performance in large
  deployments (running a lot of cells):
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extended_volumes.py#L50

  So it would be nice to use the scatter_gather_cells function to do
  this operation in parallel.

  Apart from performance scaling, if the connection to a particular
  cell fails, it would be nice to have sentinels returned, which
  scatter_gather_cells does. This helps when a cell is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747622] Re: Aggregate info in nova_scheduler loses some hosts when adding hosts to an aggregate continuously

2018-02-06 Thread Sylvain Bauza
What do you mean by "adding hosts continuously"? Do you mean issuing those
commands at once?
It looks to me that
https://github.com/openstack/nova/blob/0258cecaca88d4a305e99c5a17e2230361ef1235/nova/compute/api.py#L5050-L5062
could be racy if we have multiple API workers that fetch the aggregate
information simultaneously and try to update it; a toy illustration follows.

We could make that more resilient by adding a distributed locking
mechanism, but since the aggregates API is admin-only (and adding a host
is not done often - in comparison to an end-user API call, for
example), I leave the question open whether the solution's complications
would outweigh the benefits.
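
Toy, self-contained illustration (not nova code) of that read-modify-write
race between two API workers updating the same aggregate:

    # In-memory stand-in for the aggregates table.
    store = {51: {'hosts': ['Computer0102']}}

    def fetch(agg_id):
        # Each worker gets its own copy, like a separate DB read.
        return {'hosts': list(store[agg_id]['hosts'])}

    def save(agg_id, agg):
        store[agg_id] = agg  # last write wins, like an unguarded UPDATE

    a = fetch(51); b = fetch(51)          # two workers read concurrently
    a['hosts'].append('Computer0103'); save(51, a)
    b['hosts'].append('Computer0116'); save(51, b)
    print(store[51]['hosts'])  # ['Computer0102', 'Computer0116'] - 0103 lost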

** Tags added: sched

** Tags removed: sched
** Tags added: availability-zones openstack-version.pike

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747622

Title:
  Aggregate info in nova_scheduler loses some hosts when adding hosts to
  an aggregate continuously

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  If hosts are added to an availability_zone in quick succession, 
nova_scheduler's aggs_by_id and host_aggregates_map may lose some host 
aggregate data. Instances created in this availability_zone will then never 
select the lost hosts.

  Steps to reproduce
  ==
  1.create an availability_zone.
  nova aggregate-create test3 test3

  2.add host to this availability_zone continuously.
  nova aggregate-add-host 51 Computer0102
  nova aggregate-add-host 51 Computer0103
  nova aggregate-add-host 51 Computer0116

  3.create instances in this availability_zone.

  Expected result
  ===
  Instances can select Computer0102, Computer0103 and Computer0116.

  Actual result
  =
  Instances never select Computer0103.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
  Pike

  2. Which hypervisor did you use?
  Libvirt + KVM

  Logs & Configs
  =
  I added some logging in nova-scheduler's host_manager and found that 
aggregate information is lost in nova-api when hosts are added continuously.

  [root@Controller01 ~]# cat /var/log/nova/nova-scheduler.log | grep hanrong 
|grep update_aggregates
  2018-02-06 11:02:43.412 38000 INFO nova.scheduler.host_manager 
[req-69cb0a45-96f9-4693-91e8-46aeaec4ff54 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=[],id=51,metadata={},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
  2018-02-06 11:02:52.187 38000 INFO nova.scheduler.host_manager 
[req-b0582d0d-59fd-4a58-85ab-ab13116bec40 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
  2018-02-06 11:02:52.239 38000 INFO nova.scheduler.host_manager 
[req-eae376aa-f725-4b87-8740-df58c0bb25de 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102','Computer0103'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
  2018-02-06 11:02:52.247 38000 INFO nova.scheduler.host_manager 
[req-22a5740f-6560-4603-8904-509b39335a76 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102','Computer0116'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747622] [NEW] Aggregate info in nova_scheduler loses some hosts when adding hosts to an aggregate continuously

2018-02-06 Thread jiangyuhao
Public bug reported:

Description
===
If hosts are added to an availability_zone in quick succession, nova_scheduler's 
aggs_by_id and host_aggregates_map may lose some host aggregate data. Instances 
created in this availability_zone will then never select the lost hosts.

Steps to reproduce
==
1.create an availability_zone.
nova aggregate-create test3 test3

2.add host to this availability_zone continuously.
nova aggregate-add-host 51 Computer0102
nova aggregate-add-host 51 Computer0103
nova aggregate-add-host 51 Computer0116

3.create instances in this availability_zone.

Expected result
===
Instances can select Computer0102, Computer0103 and Computer0116.

Actual result
=
Instances never select Computer0103.

Environment
===
1. Exact version of OpenStack you are running. See the following
Pike

2. Which hypervisor did you use?
Libvirt + KVM

Logs & Configs
=
I added some logging in nova-scheduler's host_manager and found that aggregate 
information is lost in nova-api when hosts are added continuously.

[root@Controller01 ~]# cat /var/log/nova/nova-scheduler.log | grep hanrong 
|grep update_aggregates
2018-02-06 11:02:43.412 38000 INFO nova.scheduler.host_manager 
[req-69cb0a45-96f9-4693-91e8-46aeaec4ff54 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=[],id=51,metadata={},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
2018-02-06 11:02:52.187 38000 INFO nova.scheduler.host_manager 
[req-b0582d0d-59fd-4a58-85ab-ab13116bec40 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
2018-02-06 11:02:52.239 38000 INFO nova.scheduler.host_manager 
[req-eae376aa-f725-4b87-8740-df58c0bb25de 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102','Computer0103'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
2018-02-06 11:02:52.247 38000 INFO nova.scheduler.host_manager 
[req-22a5740f-6560-4603-8904-509b39335a76 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=['Computer0102','Computer0116'],id=51,metadata={availability_zone='test3'},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747622

Title:
  Aggregate info in nova_scheduler loses some hosts when adding hosts to
  an aggregate continuously

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  If hosts are added to an availability_zone in quick succession, 
nova_scheduler's aggs_by_id and host_aggregates_map may lose some host 
aggregate data. Instances created in this availability_zone will then never 
select the lost hosts.

  Steps to reproduce
  ==
  1.create an availability_zone.
  nova aggregate-create test3 test3

  2.add host to this availability_zone continuously.
  nova aggregate-add-host 51 Computer0102
  nova aggregate-add-host 51 Computer0103
  nova aggregate-add-host 51 Computer0116

  3.create instances in this availability_zone.

  Expected result
  ===
  Instances can select Computer0102, Computer0103 and Computer0116.

  Actual result
  =
  Instances never select Computer0103.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
  Pike

  2. Which hypervisor did you use?
  Libvirt + KVM

  Logs & Configs
  =
  I added some logging in nova-scheduler's host_manager and found that 
aggregate information is lost in nova-api when hosts are added continuously.

  [root@Controller01 ~]# cat /var/log/nova/nova-scheduler.log | grep hanrong 
|grep update_aggregates
  2018-02-06 11:02:43.412 38000 INFO nova.scheduler.host_manager 
[req-69cb0a45-96f9-4693-91e8-46aeaec4ff54 9974cc9acecb40f3827c3b27e803e87c 
04c742a5ce41488494f5b0d587a9bd32 - default default] hanrong update_aggregates: 
[Aggregate(created_at=2018-02-06T03:02:43Z,deleted=False,deleted_at=None,hosts=[],id=51,metadata={},name='test3',updated_at=None,uuid=5a43dd9c-aa85-4c29-a4ce-bbf58af1c150)]
  2018-02-06 11:02:52.187 38000 INFO nova.scheduler.host_manager 

[Yahoo-eng-team] [Bug 1747600] [NEW] Get auto from context for Neutron endpoint

2018-02-06 Thread Zhenyu Zheng
Public bug reported:

TBA

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747600

Title:
  Get auto from context for Neutron endpoint

Status in OpenStack Compute (nova):
  New

Bug description:
  TBA

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp