[Yahoo-eng-team] [Bug 1735780] Re: n-cpu logs error "is not a valid LUKS device" (at debug level)

2018-01-28 Thread Rajat Dhasmana
** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: os-brick
 Assignee: (unassigned) => Rajat Dhasmana (whoami-rajat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1735780

Title:
  n-cpu logs error "is not a valid LUKS device" (at debug level)

Status in OpenStack Compute (nova):
  Confirmed
Status in os-brick:
  New
Status in oslo.privsep:
  New

Bug description:
  Probably the easiest way to reproduce this is to run the tempest test
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_boot_server_from_encrypted_volume_luks.
  The test passes, but it produces an error in the n-cpu logs during
  server creation. This could suggest that, even though the test passes,
  it is not working as intended; however, I don't believe that is the
  case (from what I've been able to tell, the server creation succeeds
  and the volume is properly encrypted at all stages of the test). Logs
  attached [1].

  [1] http://paste.openstack.org/show/627976/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1735780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1745920] [NEW] EC2 Metadata Service cloud-init warning on Packet.net bare-metal server

2018-01-28 Thread Adam Fields
Public bug reported:

This is most likely an issue with the cloud provider (Packet), but I'm
filing the issue per the message:

****************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to
# be running on Amazon EC2 or one of cloud-init's known platforms that
# provide a EC2 Metadata service. In the future, cloud-init may stop
# reading metadata from the EC2 Metadata Service unless the platform can
# be identified.
#
# If you are seeing this message, please file a bug against
# cloud-init at
#    https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
# Make sure to include the cloud provider your instance is
# running on.
#
# For more information see
#   https://bugs.launchpad.net/bugs/1660385
#
# After you have filed a bug, you can disable this warning by
# launching your instance with the cloud-config below, or
# putting that content into
#    /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg
#
# #cloud-config
# datasource:
#  Ec2:
#   strict_id: false
****************************************************************************

Feel free to close as this isn't likely a bug with cloud-init itself.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1745920

Title:
  EC2 Metadata Service cloud-init warning on Packet.net bare-metal
  server

Status in cloud-init:
  New

Bug description:
  This is most likely an issue with the cloud provider (Packet), but I'm
  filing the issue per the message:

  ****************************************************************************
  # This system is using the EC2 Metadata Service, but does not appear to
  # be running on Amazon EC2 or one of cloud-init's known platforms that
  # provide a EC2 Metadata service. In the future, cloud-init may stop
  # reading metadata from the EC2 Metadata Service unless the platform can
  # be identified.
  #
  # If you are seeing this message, please file a bug against
  # cloud-init at
  #    https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
  # Make sure to include the cloud provider your instance is
  # running on.
  #
  # For more information see
  #   https://bugs.launchpad.net/bugs/1660385
  #
  # After you have filed a bug, you can disable this warning by
  # launching your instance with the cloud-config below, or
  # putting that content into
  #    /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg
  #
  # #cloud-config
  # datasource:
  #  Ec2:
  #   strict_id: false
  ****************************************************************************

  Feel free to close as this isn't likely a bug with cloud-init itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1745920/+subscriptions


[Yahoo-eng-team] [Bug 1741667] Re: live snapshot of a paused instance hangs

2018-01-28 Thread OpenStack Infra
Reviewed: https://review.openstack.org/532214
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=af326fd6f64cb331e53e87400330fffe509f0461
Submitter: Zuul
Branch: master

commit af326fd6f64cb331e53e87400330fffe509f0461
Author: Matt Riedemann 
Date:   Tue Jan 9 10:16:21 2018 -0500

libvirt: don't attempt to live snapshot paused instances

When we changed the default value of the
workarounds.disable_libvirt_livesnapshot config option value
to False in 980d0fcd75c2b15ccb0af857a9848031919c6c7d earlier
in Queens, we were testing against the Pike UCA packages which
has libvirt 3.6.0 and qemu 2.10. Live snapshots of a paused
instance work with those package versions as shown by the
test_create_image_from_paused_server test in Tempest.

However, if you just use the Ubuntu 16.04 packages for libvirt
(1.3.1) and qemu (2.5), that test fails and the live snapshot hangs
on the paused instance.

This change adds PAUSED to a list of power states that aren't
valid for live snapshot. We can eventually remove this when we
require (or add a conditional check for) libvirt>=3.6.0 and
qemu>=2.10.

Change-Id: If6c4dd6890ad6e2d00b186c6a9aa85f507b354e0
Closes-Bug: #1741667
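
For illustration, a minimal sketch of the kind of guard described above,
treating PAUSED as a power state for which a live snapshot is not attempted
(the helper name and the exact contents of the tuple are assumptions, not the
actual code in nova's libvirt driver):

    # Hedged sketch only: illustrates "add PAUSED to a list of power states
    # that aren't valid for live snapshot" from the commit message above.
    from nova.compute import power_state

    # The real list in the driver may contain additional states.
    LIVE_SNAPSHOT_INVALID_STATES = (power_state.PAUSED,)

    def _can_live_snapshot(instance):
        """Return True if a live snapshot may be attempted for this instance."""
        return instance.power_state not in LIVE_SNAPSHOT_INVALID_STATES

As the commit notes, such a guard can eventually be dropped once
libvirt >= 3.6.0 and qemu >= 2.10 are required.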


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741667

Title:
  live snapshot of a paused instance hangs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seeing this on this CI job where the Pike UCA is not enabled so
  libvirt 1.3.1 is being used, and I don't think we really support
  libvirt live snapshot in CI until at least libvirt 3.6.0.

  http://logs.openstack.org/86/531386/5/check/tempest-full/2932954/job-output.txt.gz#_2018-01-06_03_19_12_728469

  In this case, the live snapshot on the paused instance just hangs. If
  you trace req-f7805820-c671-487f-8043-a1fe30dd0372 through the n-cpu
  logs you'll see it just hangs:

  http://logs.openstack.org/86/531386/5/check/tempest-full/2932954/controller/logs/screen-n-cpu.txt

  Jan 06 02:41:44.636751 ubuntu-xenial-infracloud-vanilla-0001712754 nova-compute[10798]: WARNING nova.compute.manager [None req-f7805820-c671-487f-8043-a1fe30dd0372 tempest-ImagesTestJSON-1310987708 tempest-ImagesTestJSON-1310987708] [instance: 8c15c0d7-667d-40f8-b2d8-b6adb6a321e7] trying to snapshot a non-running instance: (state: 3 expected: 1)

  We should probably not even attempt a live snapshot on a paused
  instance since that doesn't really make sense.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741667/+subscriptions



[Yahoo-eng-team] [Bug 1745905] [NEW] system scope doesn't work for the service which use project specified endpoint

2018-01-28 Thread wangxiyuan
Public bug reported:

For some projects, such as Cinder, the endpoint is project-specific; the
format is like:
http://ip/volume/v3/{project_id}/volumes

There are two problems:
1. For this kind of endpoint, a system-scoped token doesn't work because
there is no project_id in the token.

2. When a system-scoped token is issued, Cinder's endpoint in the token
catalog is empty. This means the Cinder service will not be discoverable
when using a system-scoped token.
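
For illustration, a rough sketch of why a {project_id}-templated endpoint
cannot be built from a system-scoped token (the URL template is the one
quoted above; the helper and token dicts are hypothetical, not keystone or
cinder code):

    # Hedged sketch: a system-scoped token carries no project_id, so a
    # project-templated endpoint cannot be rendered and the catalog entry
    # for the service ends up empty. Names below are illustrative only.
    ENDPOINT_TEMPLATE = "http://ip/volume/v3/{project_id}/volumes"

    project_scoped_token = {"project_id": "abc123", "system": None}
    system_scoped_token = {"project_id": None, "system": {"all": True}}

    def build_endpoint(token):
        if token.get("project_id"):
            return ENDPOINT_TEMPLATE.format(project_id=token["project_id"])
        return None  # nothing to substitute for {project_id}

    print(build_endpoint(project_scoped_token))  # http://ip/volume/v3/abc123/volumes
    print(build_endpoint(system_scoped_token))   # None -> service not discoverable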

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1745905

Title:
  system scope doesn't work for the service which use project specified
  endpoint

Status in Cinder:
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  For some projects, such as Cinder, the endpoint is project-specific; the
  format is like:
  http://ip/volume/v3/{project_id}/volumes

  There are two problems:
  1. For this kind of endpoint, a system-scoped token doesn't work because
  there is no project_id in the token.

  2. When a system-scoped token is issued, Cinder's endpoint in the token
  catalog is empty. This means the Cinder service will not be discoverable
  when using a system-scoped token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1745905/+subscriptions



[Yahoo-eng-team] [Bug 1745367] Re: Tabs in launch server wizard not depending on OPENSTACK_NOVA_EXTENSIONS_BLACKLIST

2018-01-28 Thread OpenStack Infra
Reviewed: https://review.openstack.org/538207
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=c7bc9242b9fa416f510023f50898de1d963909c9
Submitter: Zuul
Branch: master

commit c7bc9242b9fa416f510023f50898de1d963909c9
Author: David Gutman 
Date:   Thu Jan 25 16:17:55 2018 +0100

Tabs in launch server wizard not depending on OPENSTACK_NOVA_EXTENSIONS_BLACKLIST

OPENSTACK_NOVA_EXTENSIONS_BLACKLIST is used to disable specific extensions.
In the launch instance wizard, for example, the Server Group tab should be
hidden when the "ServerGroups" extension is blacklisted, but it isn't.

It could be interesting to make these tabs dependent on the supported
extensions.

Change-Id: I15ea0f1010e3889c217c63e98f1752a4c1ad9ceb
Closes-Bug: #1745367
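
For context, the blacklist is a Horizon setting in local_settings.py; a
minimal sketch follows (the 'ServerGroups' value comes from the report above,
the rest of the settings file is assumed):

    # Hedged sketch of the Horizon configuration involved. With the fix above,
    # blacklisting "ServerGroups" also hides the Server Group tab in the
    # Launch Instance wizard instead of leaving it visible.
    OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = ['ServerGroups']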


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1745367

Title:
  Tabs in launch server wizard not depending on
  OPENSTACK_NOVA_EXTENSIONS_BLACKLIST

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  OPENSTACK_NOVA_EXTENSIONS_BLACKLIST is used to disable specific extensions.
  In the launch instance wizard, for example, the Server Group tab should be
  hidden when the "ServerGroups" extension is blacklisted, but it isn't.

  It could be interesting to make these tabs dependent on the supported
  extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1745367/+subscriptions



[Yahoo-eng-team] [Bug 1745873] [NEW] i18n: "<IP address> on subnet <subnet name>" is hard to translate

2018-01-28 Thread Akihiro Motoki
Public bug reported:

In "Parent Port" and "Sub Ports" tabs of "Create Trunk" / "Edit Trunk"
form, the "IP" column of the port table contains information of the
format of " on subnet ", but it is difficult to
translate considering the word order because only "on subnet" is marked
as translation string.

Also, " on subnet " in network ports" tab in the
"Create Instance" workflow is not marked as translation string.

Angular gettext supports "Translate parameters" feature [1]. By using
this, we can support the word order in translation.

[1] https://angular-gettext.rocketeer.be/dev-guide/translate-params/

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => queens-rc1

** Tags added: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1745873

Title:
  i18n: "<IP address> on subnet <subnet name>" is hard to translate

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In "Parent Port" and "Sub Ports" tabs of "Create Trunk" / "Edit Trunk"
  form, the "IP" column of the port table contains information of the
  format of " on subnet ", but it is difficult to
  translate considering the word order because only "on subnet" is
  marked as translation string.

  Also, " on subnet " in network ports" tab in the
  "Create Instance" workflow is not marked as translation string.

  Angular gettext supports "Translate parameters" feature [1]. By using
  this, we can support the word order in translation.

  [1] https://angular-gettext.rocketeer.be/dev-guide/translate-params/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1745873/+subscriptions



[Yahoo-eng-team] [Bug 1745838] [NEW] legacy-tempest-dsvm-cells constantly failing on stable pike and ocata due to libvirt connection reset

2018-01-28 Thread Matt Riedemann
Public bug reported:

The cellsv1 job has been failing pretty constantly within the last week
or two due to a libvirt connection reset:

http://logs.openstack.org/36/536936/1/check/legacy-tempest-dsvm-cells/a9ff792/logs/libvirt/libvirtd.txt.gz#_2018-01-28_01_25_23_762

2018-01-28 01:25:23.762+: 3896: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout

http://logs.openstack.org/36/536936/1/check/legacy-tempest-dsvm-cells/a9ff792/logs/screen-n-cpu.txt.gz?level=TRACE#_2018-01-28_01_25_23_766

2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager [req-392410f9-c834-4bdc-a439-ac20476fe212 - -] Error updating resources for node ubuntu-xenial-inap-mtl01-0002208439.
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager Traceback (most recent call last):
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/compute/manager.py", line 6590, in update_available_resource_for_node
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 535, in update_available_resource
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5675, in get_available_resource
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     data["vcpus_used"] = self._get_vcpu_used()
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5316, in _get_vcpu_used
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     for guest in self._host.list_guests():
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 573, in list_guests
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     only_running=only_running, only_guests=only_guests)]
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 593, in list_instance_domains
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     alldoms = self.get_connection().listAllDomains(flags)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     rv = execute(f, *args, **kwargs)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     six.reraise(c, e, tb)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     rv = meth(*args, **kwargs)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 4953, in listAllDomains
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager     raise libvirtError("virConnectListAllDomains() failed", conn=self)
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager libvirtError: Cannot recv data: Connection reset by peer
2018-01-28 01:25:23.766 16360 ERROR nova.compute.manager

It seems to be totally random. I'm not sure what is different about this
job running on stable vs master, but it doesn't appear to be an issue on
master:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22libvirtError%3A%20Cannot%20recv%20data%3A%20Connection%20reset%20by%20peer%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22%20AND%20build_name%3A%5C%22legacy-tempest-dsvm-cells%5C%22=7d

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cells libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745838

Title:
  legacy-tempest-dsvm-cells constantly failing on stable pike and ocata
  due to libvirt connection reset

Status in OpenStack Compute (nova):
  New

Bug description:
  The cellsv1 job has been failing pretty constantly within the last
  week or two due to a libvirt connection reset:

  http://logs.openstack.org/36/536936/1/check/legacy-tempest-dsvm-cells/a9ff792/logs/libvirt/libvirtd.txt.gz#_2018-01-28_01_25_23_762

  2018-01-28 01:25:23.762+: 3896: error