[Yahoo-eng-team] [Bug 1772384] Re: Huge pages on compute node

2018-07-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1772384

Title:
  Huge pages on compute node

Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is no step to regenerate the grub.cfg file. Is it OK to have changes
  only in /etc/default/grub? It is not reflected for me after a reboot.
  OpenStack release: Liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1772384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782607] Re: nova-lvm job failing on new tempest test test_resize_server_revert_with_volume_attached

2018-07-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/584018
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c9589ed9509f3ac2188a5a03d9332d29a8e8fbaa
Submitter: Zuul
Branch: master

commit c9589ed9509f3ac2188a5a03d9332d29a8e8fbaa
Author: Matt Riedemann 
Date:   Thu Jul 19 12:22:18 2018 -0400

Skip test_resize_server_revert_with_volume_attached in nova-lvm

The libvirt driver doesn't support resize for lvm-backed instances
so we need to skip this test to get the nova-lvm job to pass again.

Change-Id: Id752b539babadd187b4c999039cc4ca655437d47
Closes-Bug: #1782607
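
For context, a minimal illustrative sketch (not the actual nova/libvirt driver code; the class and function names here are made up) of the kind of pre-check that raises the MigrationPreCheckError seen in the nova-lvm job logs below:

```python
# Hypothetical sketch: a driver-level pre-check that rejects resize/migration
# for LVM-backed instances, mirroring the error message in the job logs.
class MigrationPreCheckError(Exception):
    """Raised when a migration pre-check fails."""


def check_can_migrate(images_type: str) -> None:
    # The libvirt driver does not support resize for LVM-backed instances,
    # so such requests are rejected before any migration work starts.
    if images_type == 'lvm':
        raise MigrationPreCheckError(
            'Migration is not supported for LVM backed instances')


if __name__ == '__main__':
    try:
        check_can_migrate('lvm')
    except MigrationPreCheckError as exc:
        print('pre-check failed: %s' % exc)
```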


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782607

Title:
  nova-lvm job failing on new tempest test
  test_resize_server_revert_with_volume_attached

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/70/434870/37/check/nova-lvm/a7cce3d/logs/testr_results.html.gz

  http://logs.openstack.org/70/434870/37/check/nova-lvm/a7cce3d/logs/screen-n-cpu.txt.gz?level=TRACE#_Jul_19_14_40_50_480759

  Jul 19 14:40:50.480759 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server [None req-f64a6210-b9b5-4b6d-b7f6-a2d11444741e tempest-ServerActionsTestJSON-1906935271 tempest-ServerActionsTestJSON-1906935271] Exception during message handling: MigrationPreCheckError: Migration pre-check error: Migration is not supported for LVM backed instances
  Jul 19 14:40:50.481033 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  Jul 19 14:40:50.481243 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
  Jul 19 14:40:50.481466 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
  Jul 19 14:40:50.481656 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
  Jul 19 14:40:50.481794 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
  Jul 19 14:40:50.481926 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
  Jul 19 14:40:50.482062 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
  Jul 19 14:40:50.482190 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 79, in wrapped
  Jul 19 14:40:50.482309 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
  Jul 19 14:40:50.482433 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jul 19 14:40:50.482598 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server self.force_reraise()
  Jul 19 14:40:50.482728 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  Jul 19 14:40:50.482842 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
  Jul 19 14:40:50.482961 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 69, in wrapped
  Jul 19 14:40:50.483074 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
  Jul 19 14:40:50.487083 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/manager.py", line 188, in decorated_function
  Jul 19 14:40:50.487363 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server "Error: %s", e, instance=instance)
  Jul 19 14:40:50.487492 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jul 19 14:40:50.487609 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]: ERROR oslo_messagin

[Yahoo-eng-team] [Bug 1782851] [NEW] Running tox -efast8 no longer works due to zVMConnector installation issues

2018-07-20 Thread Jay Pipes
Public bug reported:

I can no longer run tox -efast8 on my local workstation. I've removed
.tox/fast8 (rm -rf .tox/fast8) and tried from scratch twice; I continue to get
the same error:

```
Collecting zVMCloudConnector===1.2.1 (from -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt (line 144))
  Using cached https://files.pythonhosted.org/packages/a0/c2/b7ae60e75aea4c840de0a2a29b22e3949a2bc65aafce1e79baa35023f45d/zVMCloudConnector-1.2.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-3bfgba_y/zVMCloudConnector/setup.py", line 18, in <module>
    from zvmsdk import version as sdkversion
  File "/tmp/pip-install-3bfgba_y/zVMCloudConnector/zvmsdk/version.py", line 29, in <module>
    raise RuntimeError('On Python 3, zvm sdk supports to Python 3.5')
RuntimeError: On Python 3, zvm sdk supports to Python 3.5
```
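
The failure is a deliberate interpreter-version guard that runs when pip imports the package's version module during egg_info. A minimal sketch of that pattern (a hedged reconstruction, not the actual zvmsdk code):

```python
# Hypothetical reconstruction of the guard in zvmsdk/version.py: importing
# the module under any Python 3 other than 3.5 raises, which aborts
# `python setup.py egg_info` and therefore the tox environment creation.
import sys

if sys.version_info[0] == 3 and sys.version_info[:2] != (3, 5):
    raise RuntimeError('On Python 3, zvm sdk supports to Python 3.5')
```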

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Running tox -efast8 no longer works due to zKVMConnector installation issues
+ Running tox -efast8 no longer works due to zVMConnector installation issues

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782851

Title:
  Running tox -efast8 no longer works due to zVMConnector installation
  issues

Status in OpenStack Compute (nova):
  New

Bug description:
  I can no longer run tox -efast8 on my local workstation. I've removed
  .tox/fast8 (rm -rf .tox/fast8) and tried from scratch twice; I continue
  to get the same error:

  ```
  Collecting zVMCloudConnector===1.2.1 (from -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt (line 144))
    Using cached https://files.pythonhosted.org/packages/a0/c2/b7ae60e75aea4c840de0a2a29b22e3949a2bc65aafce1e79baa35023f45d/zVMCloudConnector-1.2.1.tar.gz
  Complete output from command python setup.py egg_info:
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-install-3bfgba_y/zVMCloudConnector/setup.py", line 18, in <module>
      from zvmsdk import version as sdkversion
    File "/tmp/pip-install-3bfgba_y/zVMCloudConnector/zvmsdk/version.py", line 29, in <module>
      raise RuntimeError('On Python 3, zvm sdk supports to Python 3.5')
  RuntimeError: On Python 3, zvm sdk supports to Python 3.5
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782840] [NEW] No policy enforcement for several delete metadef APIs

2018-07-20 Thread Rick Bartra
Public bug reported:

There is no policy enforcement for the following APIs:

Delete namespace: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-namespace

Delete object: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-object

Remove resource type association: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-resource-type-association

Remove property definition: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-property-definition

Delete tag definition: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-tag-definition
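
For illustration, a minimal sketch of how such a delete call is normally gated with oslo.policy (the rule name, default and credentials below are hypothetical, not glance's actual policy code):

```python
# Hypothetical sketch: enforce a policy rule before performing a metadef
# delete, so unauthorized callers are rejected before anything is removed.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault('delete_metadef_namespace', 'role:admin'))


def delete_namespace(creds, namespace):
    # do_raise=True makes enforce() raise PolicyNotAuthorized on failure,
    # so the delete below is never reached for unauthorized callers.
    enforcer.enforce('delete_metadef_namespace', {'namespace': namespace},
                     creds, do_raise=True)
    print('namespace %s deleted' % namespace)


if __name__ == '__main__':
    delete_namespace({'roles': ['admin']}, 'OS::Software::DBMS')
```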

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1782840

Title:
  No policy enforcement for several delete metadef APIs

Status in Glance:
  New

Bug description:
  There is no policy enforcement for the following APIs:

  Delete namespace: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-namespace

  Delete object: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-object

  Remove resource type association: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-resource-type-association

  Remove property definition: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-property-definition

  Delete tag definition: https://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-tag-definition

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1782840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760029] Re: ml2 hierarchical port binding cause binding loop

2018-07-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/569715
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=69b83526303329b8239f839b2869acb1128387e5
Submitter: Zuul
Branch: master

commit 69b83526303329b8239f839b2869acb1128387e5
Author: Huang Cheng 
Date:   Mon May 21 14:06:49 2018 +0800

Fix ml2 hierarchical port binding driver check error.

Avoid binding loop caused by the wrong comparison between
"id" and "segmentation_id" of a "segment" object.

Change-Id: Ibc9f3093318d92027eaaf81bd08401c0f02ae414
Closes-Bug: #1760029


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760029

Title:
  ml2 hierarchical port binding cause binding loop

Status in neutron:
  Fix Released

Bug description:
  Related to: Bug #1745572

  The binding loop still exists due to the mistaken comparison as below.

  https://pastebin.com/RRFX8YfG

  The value 1 refers to the segmentation_id of a "segment" object (e.g., vxlan vni, vlan tag).
  The value 2 refers to the uuid of a "segment" object, which will be persisted as ml2_port_binding_levels.segment_id after the binding process.
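
A small standalone sketch of the distinction (the segment dict below is hypothetical, merely shaped like an ML2 segment; it is not neutron code):

```python
# Hypothetical ML2-style segment dict illustrating the two different fields.
segment = {
    'id': 'c2c8e8f2-5d5e-4a3b-9c1d-0f0e1d2c3b4a',  # UUID, persisted as
                                                   # ml2_port_binding_levels.segment_id
    'network_type': 'vlan',
    'segmentation_id': 100,                        # VLAN tag / VXLAN VNI
}

stored_segment_id = segment['id']  # what a binding level records

# Wrong check: a UUID never equals a segmentation_id, so the driver believes
# the level is still unbound and keeps retrying -> binding loop.
print(stored_segment_id == segment['segmentation_id'])  # False

# Correct check: compare the stored UUID with the segment's UUID.
print(stored_segment_id == segment['id'])               # True
```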

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733362] Re: availability_zone extension missing from API ref

2018-07-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/566184
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=29609c6aa84a0acfa44698ea889f1790a6a9bf92
Submitter: Zuul
Branch: master

commit 29609c6aa84a0acfa44698ea889f1790a6a9bf92
Author: Hongbin Lu 
Date:   Thu May 3 23:04:34 2018 +

api-ref: add availability_zone extension

Depends-On: Id882f949cc73a34290c311f3ce3d69d1b809c29f
Change-Id: Icbf427f20ca912a40d68e8042abeeabffeb3005f
Closes-Bug: #1733362


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733362

Title:
  availability_zone extension missing from API ref

Status in neutron:
  Fix Released

Bug description:
  The availability_zone extension is not documented in the API ref:
  - The availability_zones resource is not documented.
  - The az extension is not documented on the agent api-ref nor is the availability_zone attribute added to agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691489] Re: fstab entries written by cloud-config may not be mounted

2018-07-20 Thread Scott Moser
** Changed in: cloud-init (Ubuntu Zesty)
   Status: Confirmed => Won't Fix

** Changed in: cloud-init (Ubuntu Artful)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691489

Title:
  fstab entries written by cloud-config may not be mounted

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Won't Fix
Status in cloud-init source package in Zesty:
  Won't Fix
Status in cloud-init source package in Artful:
  Won't Fix

Bug description:
  === Begin SRU Template ===
  [Impact]
  There is a race condition on a re-deployment of cloud-init on Azure
  where /mnt will not get properly formatted or mounted.  This is due to
  "dirty" entries in /etc/fstab that cause a device to be busy when
  cloud-init goes to format it.  This shows itself usually as 'mkfs'
  complaining that the device is busy.  The cause is that systemd
  starts an fsck and collides with cloud-init re-formatting the disk.

  The problem can be seen other places but seemed to be most reproducible
  and originally found on Azure.

  [Test Case]
  1.) Launch an Azure VM, ideally size L32S.
  2.) Log in and verify the system properly mounted /mnt.
  3.) Re-deploy the VM through the web UI and try again.

  [Regression Potential]
  Worst case scenario, these changes unnecessarily slow down boot and
  do not fix the problem.

  [Regression]
  This SRU change caused bug 1717477.

  [Other Info]
  Upstream commit at
    https://git.launchpad.net/cloud-init/commit/?id=1f5489c258

  === End SRU Template ===

  As reported in bug 1686514, sometimes /mnt will not get mounted when
  re-deploying or stopping-then-starting an Azure VM of size L32S. This is
  probably a more generic issue; I suspect it shows up here due to the speed
  of the disks on these systems.

  Related bugs:
   * bug 1686514: Azure: cloud-init does not handle reformatting GPT partition ephemeral disks
   * bug 1717477: cloud-init generates ordering cycle via After=cloud-init in systemd-fsck

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782786] [NEW] py3.7 test failures - expected string or bytes-like object and _impl.MismatchError

2018-07-20 Thread Corey Bryant
Public bug reported:

I'm hitting 2 remaining test failures after other py3.7 fixes have
landed [1]:

test_validate_patternProperties_fails
  TypeError: expected string or bytes-like object

test_name_with_non_printable_characters
  testtools.matchers._impl.MismatchError

Full tracebacks: https://paste.ubuntu.com/p/hY4N3Yx7FW/

[1] https://review.openstack.org/#/c/584365/
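
The first error is the generic failure mode of handing the re module something that is not a string; a hedged illustration only (the actual nova trigger may differ, see the pasted tracebacks):

```python
# Illustration: re raises exactly this TypeError when it is given a
# non-string (e.g. None) where the subject string is expected.
import re

try:
    re.match(r'^[a-zA-Z0-9]+$', None)
except TypeError as exc:
    print(exc)  # expected string or bytes-like object
```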

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- py3.7 test failures
+ py3.7 test failures - expected string or bytes-like object and 
_impl.MismatchError

** Description changed:

  I'm hitting 2 remaining test failures after other py3.7 fixes have
  landed [1]:
  
- test_validate_patternProperties_fails - TypeError: expected string or 
bytes-like object
- test_name_with_non_printable_characters - 
testtools.matchers._impl.MismatchError
+ test_validate_patternProperties_fails
+   TypeError: expected string or bytes-like object
+ 
+ test_name_with_non_printable_characters
+   testtools.matchers._impl.MismatchError
  
  Full tracebacks: https://paste.ubuntu.com/p/hY4N3Yx7FW/
  
  [1] https://review.openstack.org/#/c/584365/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782786

Title:
  py3.7 test failures - expected string or bytes-like object and
  _impl.MismatchError

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm hitting 2 remaining test failures after other py3.7 fixes have
  landed [1]:

  test_validate_patternProperties_fails
TypeError: expected string or bytes-like object

  test_name_with_non_printable_characters
testtools.matchers._impl.MismatchError

  Full tracebacks: https://paste.ubuntu.com/p/hY4N3Yx7FW/

  [1] https://review.openstack.org/#/c/584365/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782746] [NEW] py3.7 async is a keyword

2018-07-20 Thread Corey Bryant
Public bug reported:

I'm working on packaging nova for rocky on ubuntu cosmic which is now at
py3.7. In py3.7 "async" is a keyword, which results in issues such as:

Failed to import test module: nova.tests.unit
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/unittest2/loader.py", line 490, in _find_test_path
    package = self._get_module_from_name(name)
  File "/usr/lib/python3/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "/<>/nova/tests/unit/__init__.py", line 30, in <module>
    objects.register_all()
  File "/<>/nova/objects/__init__.py", line 28, in register_all
    __import__('nova.objects.aggregate')
  File "/<>/nova/objects/aggregate.py", line 23, in <module>
    from nova.db.sqlalchemy import api as db_api
  File "/<>/nova/db/sqlalchemy/api.py", line 218
    reader_mode = get_context_manager(context).async
                                               ^
SyntaxError: invalid syntax
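
A short illustration of the breakage and the conventional remedy, renaming the attribute with a trailing underscore per PEP 8 for keyword clashes (this is a sketch, not the actual nova/oslo.db patch):

```python
# In Python 3.7 `async` is a reserved keyword, so `obj.async` is a
# SyntaxError at compile time and the whole module fails to import, as in
# the traceback above. A trailing-underscore name parses fine.
class FakeContextManager:
    def __init__(self):
        # `self.async = True` would not even parse on Python 3.7.
        self.async_ = True


manager = FakeContextManager()
reader_mode = manager.async_
print(reader_mode)
```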

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782746

Title:
  py3.7 async is a keyword

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm working on packaging nova for rocky on ubuntu cosmic which is now
  at py3.7. In py3.7 "async" is a keyword, which results in issues such
  as:

  Failed to import test module: nova.tests.unit
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/unittest2/loader.py", line 490, in _find_test_path
      package = self._get_module_from_name(name)
    File "/usr/lib/python3/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "/<>/nova/tests/unit/__init__.py", line 30, in <module>
      objects.register_all()
    File "/<>/nova/objects/__init__.py", line 28, in register_all
      __import__('nova.objects.aggregate')
    File "/<>/nova/objects/aggregate.py", line 23, in <module>
      from nova.db.sqlalchemy import api as db_api
    File "/<>/nova/db/sqlalchemy/api.py", line 218
      reader_mode = get_context_manager(context).async
                                                 ^
  SyntaxError: invalid syntax

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782732] [NEW] Sorting of items in panels is not done correctly when there is huge list

2018-07-20 Thread Majety Sri Ashika Meher
Public bug reported:

When trying to sort Flavors (e.g. by number of "VCPUs") or Instances
(e.g. by "Time since created"), in the case of a huge list spread over
two (2) or more sheets (pages), the sorting mechanism sorts only the list
of Flavors or Instances on the current page. Further pages (e.g. the 2nd)
are not affected by the sorting. The sorting mechanism in the Images panel,
however, sorts the entire list. This is because of the difference in table
implementations between the Python and Angular panels. So, all panels
should be implemented in Angular for proper sorting.
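
A plain-Python illustration of the behaviour described above, showing why sorting only the current page differs from sorting the whole list and then paginating (the flavor data is hypothetical):

```python
# Hypothetical flavor data spread over pages of three rows each.
flavors = [{'name': 'f%d' % i, 'vcpus': v}
           for i, v in enumerate([8, 1, 4, 2, 16, 1])]
page_size = 3

# Per-page sort (the behaviour the bug describes): only the rows already on
# page 1 are reordered.
page1_local = sorted(flavors[:page_size], key=lambda f: f['vcpus'])

# Whole-list sort, then paginate (what the Angular Images table does).
page1_global = sorted(flavors, key=lambda f: f['vcpus'])[:page_size]

print(page1_local)
print(page1_global)  # differs: rows from later pages sort onto page 1
```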

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1782732

Title:
  Sorting of items in panels is  not done correctly when there is huge
  list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When trying to sort Flavors (e.g. by number of "VCPUs") or Instances
  (e.g. by "Time since created"), in the case of a huge list spread over
  two (2) or more sheets (pages), the sorting mechanism sorts only the list
  of Flavors or Instances on the current page. Further pages (e.g. the 2nd)
  are not affected by the sorting. The sorting mechanism in the Images
  panel, however, sorts the entire list. This is because of the difference
  in table implementations between the Python and Angular panels. So, all
  panels should be implemented in Angular for proper sorting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1782732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782704] Re: keystone "--config-file" cli argument not work

2018-07-20 Thread armageddon
** Project changed: keystone-mapper => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1782704

Title:
  keystone "--config-file" cli argument not work

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone Queens version.
  Move keystone.conf to a custom location and run `keystone-manage --config-file {my custom location}`; keystone-manage will not find my conf file and `Config file not found, using default configs.` will be printed.

  Reproduce:
  1. git clone keystone queens
  2. pip install -r requirements.txt && pip install --prefix=/openstack
  3. Run `keystone-manage --config-file /openstack/etc/keystone/keystone.conf`
  4. The "Config file not found" message will be printed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1782704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782704] [NEW] keystone "--config-file" cli argument not work

2018-07-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Keystone Queens version.
Move keystone.conf to a custom location and run `keystone-manage --config-file {my custom location}`; keystone-manage will not find my conf file and `Config file not found, using default configs.` will be printed.

Reproduce:
1. git clone keystone queens
2. pip install -r requirements.txt && pip install --prefix=/openstack
3. Run `keystone-manage --config-file /openstack/etc/keystone/keystone.conf`
4. The "Config file not found" message will be printed.
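
For reference, a minimal sketch of how --config-file is consumed through oslo.config (illustrative only; keystone-manage's real entry point does considerably more):

```python
# Illustrative sketch: ConfigOpts registers --config-file/--config-dir
# itself, so after calling CONF() you can inspect which files were loaded.
import sys

from oslo_config import cfg

CONF = cfg.CONF


def main(argv):
    CONF(argv[1:], project='keystone')
    # An explicitly passed file that does not exist raises
    # cfg.ConfigFilesNotFoundError; otherwise the loaded paths show up here.
    print('config files loaded: %s' % CONF.config_file)


if __name__ == '__main__':
    main(sys.argv)
```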

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
keystone "--config-file" cli argument not work
https://bugs.launchpad.net/bugs/1782704
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Identity (keystone).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782714] [NEW] Properties of an attached volume are lost after live migration

2018-07-20 Thread Viktor Tikkanen
Public bug reported:

Steps to reproduce the problem:

1. Launch an instance.
2. Create a volume and attach it to the instance. The volume will have the "attached_mode='rw'" property:

[cloudadmin@controller-1 ~(admin)]$ openstack volume list --long
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
| ID                                   | Name     | Status | Size | Type | Bootable | Attached to                       | Properties         |
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
| 4394bec9-3b87-4a6e-b977-cb56719b0d2a | test_vol | in-use |    1 | None | false    | Attached to cirros-01 on /dev/vdb | attached_mode='rw' |
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+

3. Start live migration of the instance:

[cloudadmin@controller-1 ~(admin)]$ openstack server migrate --live compute-2 --block-migration cirros-01

4. After completion of the migration, the volume properties are lost:

[cloudadmin@controller-1 ~(admin)]$ openstack volume list --long
+--------------------------------------+----------+--------+------+------+----------+---------------------------------------------------------------------+--------------------+
| ID                                   | Name     | Status | Size | Type | Bootable | Attached to                                                         | Properties         |
+--------------------------------------+----------+--------+------+------+----------+---------------------------------------------------------------------+--------------------+
| 4394bec9-3b87-4a6e-b977-cb56719b0d2a | test_vol | in-use |    1 | None | false    | Attached to cirros-01 on /dev/vdb Attached to cirros-01 on /dev/vdb | attached_mode='rw' |
+--------------------------------------+----------+--------+------+------+----------+---------------------------------------------------------------------+--------------------+
[cloudadmin@controller-1 ~(admin)]$
[cloudadmin@controller-1 ~(admin)]$ openstack volume list --long
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
| ID                                   | Name     | Status | Size | Type | Bootable | Attached to                       | Properties         |
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
| 4394bec9-3b87-4a6e-b977-cb56719b0d2a | test_vol | in-use |    1 | None | false    | Attached to cirros-01 on /dev/vdb |                    |
+--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
[cloudadmin@controller-1 ~(admin)]$

Version information:

[cloudadmin@controller-1 ~(admin)]$ sudo rpm -qa|grep nova
openstack-ansible-os_nova-17.0.2-1.el7.centos.ncir.2.noarch
python-nova-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-placement-api-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-novncproxy-17.0.2-1.el7.centos.ncir.2.noarch
nova-inventory-c2.g8617a07-1.el7.centos.ncir.noarch
openstack-nova-scheduler-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-conductor-17.0.2-1.el7.centos.ncir.2.noarch
python2-novaclient-10.1.0-1.el7.noarch
openstack-nova-compute-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-api-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-common-17.0.2-1.el7.centos.ncir.2.noarch
openstack-nova-console-17.0.2-1.el7.centos.ncir.2.noarch

[cloudadmin@controller-1 ~(admin)]$ sudo rpm -qa|grep cinder
python-cinder-12.0.2-2.el7.noarch
openstack-cinder-12.0.2-2.el7.noarch
python2-cinderclient-3.5.0-1.el7.noarch
openstack-ansible-os_cinder-17.0.2-1.el7.centos.ncir.1.noarch
[cloudadmin@controller-1 ~(admin)]$

ceph: 12.2.5
libvirt: 3.9.0
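
One way to observe the regression from a script is to snapshot the volume's metadata before and after the live migration. A hedged sketch using openstacksdk (the cloud name is a placeholder, and the metadata attribute is assumed to be where the Properties column content is exposed):

```python
# Hedged sketch with openstacksdk: compare volume metadata before and after
# a live migration of the instance the volume is attached to.
import openstack

conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

VOLUME_ID = '4394bec9-3b87-4a6e-b977-cb56719b0d2a'

before = dict(conn.block_storage.get_volume(VOLUME_ID).metadata or {})
input('live-migrate the instance now, then press Enter... ')
after = dict(conn.block_storage.get_volume(VOLUME_ID).metadata or {})

print('before:', before)   # expected to contain attached_mode='rw'
print('after :', after)    # empty output here reproduces this bug
```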

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782714

Title:
  Properties of an attached volume are lost after live migration

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce the problem:

  1. Launch an instance.
  2. Create a volume and attach it to the instance. The volume will have the "attached_mode='rw'" property:

  [cloudadmin@controller-1 ~(admin)]$ openstack volume list --long
  +--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+
  | ID                                   | Name     | Status | Size | Type | Bootable | Attached to                       | Properties         |
  +--------------------------------------+----------+--------+------+------+----------+-----------------------------------+--------------------+