[Yahoo-eng-team] [Bug 1719806] Re: IPv4 subnets added when VM is already up on an IPv6 subnet on the same network, does not enable VM ports to get IPv4 address

2018-03-02 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719806

Title:
  IPv4 subnets added when VM is already up on an IPv6 subnet on the same
  network, does not enable VM ports to get IPv4 address

Status in neutron:
  Expired

Bug description:
  On both stable/pike and stable/ocata, we performed the following
  steps:

  1. Create a network
  2. Create an IPv6 subnet in SLAAC mode (both ipv6_ra_mode and ipv6_address_mode set to slaac)
  3. Create a router
  4. Attach the IPv6 subnet to the router
  5. Now boot VMs with the network-id.
  6. Make sure VMs are up and able to communicate via their Global and 
Link-Local IPv6 addresses.
  7. Create an IPv4 subnet on the same network.
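
  A minimal CLI sketch of these steps (the network names, CIDRs, image
  and flavor below are illustrative placeholders, not taken from the
  original report):

  $ openstack network create demo-net
  $ openstack subnet create demo-v6 --network demo-net --ip-version 6 \
      --subnet-range 2001:db8::/64 --ipv6-ra-mode slaac --ipv6-address-mode slaac
  $ openstack router create demo-router
  $ openstack router add subnet demo-router demo-v6
  $ openstack server create --image cirros --flavor m1.tiny --network demo-net vm1
  # verify IPv6 connectivity to vm1 first, then add the IPv4 subnet
  $ openstack subnet create demo-v4 --network demo-net --ip-version 4 \
      --subnet-range 192.0.2.0/24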

  After step 7, you will notice that the fixed-ips of the Neutron ports
  belonging to the VMs booted in step 5 are not automatically updated
  with an address from the new IPv4 subnet.

  The user has to manually update each VM's Neutron port via the
  port-update command with the IPv4 subnet-id and then recycle eth0
  inside the VM; only then does the VM get an IPv4 address (a sketch of
  this workaround follows below).
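
  A hedged sketch of that workaround (the port and subnet IDs are
  placeholders, and the exact CLI flags are an assumption rather than a
  quote from the reporter):

  $ openstack port set <vm-port-id> --fixed-ip subnet=<ipv4-subnet-id>
  # (with the legacy neutron CLI, "neutron port-update --fixed-ip ..."
  #  replaces the whole fixed_ips list, so both the IPv6 and the IPv4
  #  subnets would have to be given)
  # then recycle eth0 inside the VM so it picks up the new address:
  $ sudo dhclient -v eth0      # or: sudo ifdown eth0 && sudo ifup eth0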

  Only the DHCP Neutron port was automatically updated with an IPv4
  address, in addition to its IPv6 address, after the above steps.

  Any new VM spawned after both the IPv4 and IPv6 subnets are available
  on the network gets both addresses, and its Neutron port in the
  control plane reflects the same.

  Incidentally, if the same steps are followed with the order swapped
  (create the IPv4 subnet first, boot VMs, then create an IPv6 subnet on
  the same network), the VM Neutron ports' fixed-ips are updated
  automatically with the newly assigned IPv6 global addresses on the
  IPv6 subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719806/+subscriptions



[Yahoo-eng-team] [Bug 1752986] [NEW] Live migrate UnexpectedTaskStateError

2018-03-02 Thread Kevin Stevens
Public bug reported:

Description
===
Occasionally, when performing a live migration, the instance goes to ERROR state
with the message "UnexpectedTaskStateError: Conflict updating instance .
Expected: {'task_state': [u'migrating']}. Actual: {'task_state': None}"
The migration itself has always succeeded so far and the instance is always
found residing on the target host. Updating the "node" and "host" columns of
the instance's row in the nova database with the destination host and then
resetting the instance state to ACTIVE gets things back in order.
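
A hedged way to confirm where the instance actually landed before resetting
its state (the fields shown are standard admin-visible server attributes):

# openstack server show e8928cb2-afae-4cca-93db-f218e9f22324 \
    -c status -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:hypervisor_hostname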

The issue appears to occur randomly across 16 uniform compute nodes and
seems like a race condition.

Steps to reproduce
==
1. Boot an instance to Compute01
2. Issue a live-migrate command for the instance targeting Compute02 (this can
be done via Horizon or via python-openstackclient)
# openstack server migrate --shared-migration --live computehost02 e8928cb2-afae-4cca-93db-f218e9f22324
3. Live-migration works, the instance remains accessible and is moved to the
new host. However, ~20% of the time, the instance goes to ERROR state and some
cleanup must be done in the database:
MariaDB [nova]> update instances set node='computehost02',host='computehost02' where uuid='e8928cb2-afae-4cca-93db-f218e9f22324';
# openstack server set --state active e8928cb2-afae-4cca-93db-f218e9f22324

Expected result
===
The migrated instance should move successfully and return to ACTIVE state.

Actual result
=
Instances occasionally end up in ERROR state after a "successful" live-migration.

Environment
===
1. This is a Newton environment with Nova Libvirt/KVM backed by Ceph. 
Networking is provided by Neutron ML2 linux bridge agent.
root@computehost02:~# nova-compute --version
14.0.8
root@computehost02:~# dpkg -l | grep libvir
ii  libvirt-bin              1.3.1-1ubuntu10.15                      amd64  programs for the libvirt library
ii  libvirt0:amd64           1.3.1-1ubuntu10.15                      amd64  library for interfacing with different virtualization systems
ii  python-libvirt           1.3.1-1ubuntu1.1                        amd64  libvirt Python bindings
root@computehost02:~# dpkg -l | grep qemu
ii  ipxe-qemu                1.0.0+git-20150424.a25a16d-1ubuntu1.2   all    PXE boot firmware - ROM images for qemu
ii  qemu                     1:2.5+dfsg-5ubuntu10.16                 amd64  fast processor emulator
ii  qemu-block-extra:amd64   1:2.5+dfsg-5ubuntu10.16                 amd64  extra block backend modules for qemu-system and qemu-utils
ii  qemu-slof                20151103+dfsg-1ubuntu1                  all    Slimline Open Firmware -- QEMU PowerPC version
ii  qemu-system              1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries
ii  qemu-system-arm          1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (arm)
ii  qemu-system-common       1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-mips         1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (mips)
ii  qemu-system-misc         1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (miscelaneous)
ii  qemu-system-ppc          1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (ppc)
ii  qemu-system-sparc        1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (sparc)
ii  qemu-system-x86          1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU full system emulation binaries (x86)
ii  qemu-user                1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU user mode emulation binaries
ii  qemu-utils               1:2.5+dfsg-5ubuntu10.16                 amd64  QEMU utilities
root@computehost02:~#

root@cephhost:~# dpkg -l | grep -i ceph
ii  ceph         10.2.10-1xenial   amd64  distributed storage and file system
ii  ceph-base    10.2.10-1xenial   amd64  common ceph daemon libraries and management tools
ii  ceph-common  10.2.10-1xenial   amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-mon     10.2.10-1xenial   amd64  monitor server for the ceph

[Yahoo-eng-team] [Bug 1752977] [NEW] Azure Preprovisioning waits an extra 60s due to timeout

2018-03-02 Thread Douglas Jordan
Public bug reported:

During Azure preprovisioning, the platform-owned VM gets into a polling
loop while waiting for the reprovisioning data file (the customer's
ovf_env.xml). If the VM moves from one vnet to another, we will get a
timeout exception on the request to IMDS and thus need to re-dhcp to get
the new IP and network configuration. Currently, that timeout is 60
seconds, and further testing shows that IMDS responds within 5-20
milliseconds, so we propose lowering the timeout to 1 second in order to
speed up the total deployment time for the customer.
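
As a rough illustration of the proposed behaviour (this is not cloud-init's
actual code; the IMDS reprovision-data URL and api-version are assumptions
based on Azure's metadata service conventions), a shell-level poll with a
1-second timeout looks roughly like:

while ! curl -sf --max-time 1 -H Metadata:true \
      "http://169.254.169.254/metadata/reprovisiondata?api-version=2017-04-02" \
      -o /var/tmp/ovf_env.xml; do
    sleep 1   # on repeated timeouts the proposal is to re-dhcp rather than give up
done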

In addition, due to IMDS server updates there is a chance it will be
unreachable. If it is, then we could be re-dhcping multiple times, and it
is possible to hit the current maximum DHCP retry count of 5. That would
put the VM in an effectively useless state from which it could not
recover. We propose removing the max DHCP retry count for
preprovisioning.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Merge proposal linked:
   https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+merge/340546

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1752977

Title:
  Azure Preprovisioning waits an extra 60s due to timeout

Status in cloud-init:
  New

Bug description:
  During Azure preprovisioning, the platform-owned VM gets into a
  polling loop while waiting for the reprovisioning data file (the
  customer's ovf_env.xml). If the VM moves from one vnet to another, we
  will get a timeout exception on the request to IMDS and thus need to
  re-dhcp to get the new IP and network configuration. Currently, that
  timeout is 60 seconds, and further testing shows that IMDS responds
  within 5-20 milliseconds, so we propose lowering the timeout to
  1 second in order to speed up the total deployment time for the
  customer.

  In addition, due to IMDS server updates there is a chance it will be
  unreachable. If it is, then we could be re-dhcping multiple times, and
  it is possible to hit the current maximum DHCP retry count of 5. That
  would put the VM in an effectively useless state from which it could
  not recover. We propose removing the max DHCP retry count for
  preprovisioning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1752977/+subscriptions



[Yahoo-eng-team] [Bug 1752711] Re: cloud-init no longer processes user data on GCE in artful

2018-03-02 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init -
17.2-35-gf576b2a2-0ubuntu1~17.10.2

---
cloud-init (17.2-35-gf576b2a2-0ubuntu1~17.10.2) artful-proposed; urgency=medium

  * cherry-pick 40e7738: GCE: fix reading of user-data that is not
base64 encoded. (LP: #1752711)

 -- Chad Smith   Thu, 01 Mar 2018 16:03:46
-0700

** Changed in: cloud-init (Ubuntu Artful)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1752711

Title:
  cloud-init no longer processes user data on GCE in artful

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Committed
Status in cloud-init source package in Artful:
  Fix Released
Status in cloud-init source package in Bionic:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact]
  Any user-data provided when creating Google Cloud instances is ignored,
  so no instance customization is observed. This is a silent failure and
  no tracebacks in cloud-init surface that failure to the user.

  Providing a simple cloud-config to set a hostname will provide a quick
  validation of cloud-init observing user-data.

  [Test Case]

  # Create cloud-config which should change the hostname, and cli prompt
  $ cat > sethostname.yaml 

[Yahoo-eng-team] [Bug 1750591] Re: Admin-deployed qos policy breaks tenant port creation

2018-03-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/546973
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8c23e357095b986ddd966c2d280feb2eabb933bf
Submitter: Zuul
Branch:master

commit 8c23e357095b986ddd966c2d280feb2eabb933bf
Author: Sławek Kapłoński 
Date:   Thu Feb 22 13:36:43 2018 +0100

Fix creation of port when network has admin's QoS policy set

In the case where an admin user creates a QoS policy and attaches it to
a user's network, there was an issue with fetching that QoS policy
using the user's context in order to validate it.

This patch changes it so that the QoS policy is always fetched with an
elevated context during port/network create/update validation.

Change-Id: I464888ca3920b42edd6ab638f6a317ee51ef0994
Closes-Bug: #1750591


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750591

Title:
  Admin-deployed qos policy breaks tenant port creation

Status in neutron:
  Fix Released

Bug description:
  This is mainly following
  https://docs.openstack.org/neutron/pike/admin/config-qos.html, steps
  to reproduce:

  1. Admin creates qos policy "default" in admin project
  2. User creates network "mynet" in user project
  3. Admin applies qos policy to tenant network via "openstack network set 
--qos-policy default mynet"
  4. User tries to create (an instance with) a port in "mynet".
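
  A hedged CLI sketch of these four steps (the policy and network names
  are the ones mentioned above; switching between admin and user
  credentials is implied):

  $ openstack network qos policy create default         # as admin, in the admin project
  $ openstack network create mynet                       # as the user, in the user project
  $ openstack network set --qos-policy default mynet     # as admin
  $ openstack port create --network mynet myport         # as the user -> HTTP 500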

  Result: Neutron fails with "Internal server error". q-svc.log shows

  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
[req-f20ed290-5a24-44fe-9b2a-9cc2a6caacbc 8585b4c745184f538091963331dad1c7 
8b039227731847a0b62eddfde3ab17c0 - default default] POST failed.: 
CallbackFailure: Callback
   
neutron.services.qos.qos_plugin.QoSPlugin._validate_create_port_callback--9223372036854470149
 failed with "'NoneType' object has no attribute 'rules'"
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
Traceback (most recent call last):
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/pecan/core.py", line 678, in __call__
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.invoke_controller(controller, args, kwargs, state)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/pecan/core.py", line 569, in 
invoke_controller
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
result = controller(*args, **kwargs)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 93, in wrapped
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
setattr(e, '_RETRY_EXCEEDED', True)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 89, in wrapped
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in wrapper
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
ectxt.value = e.inner_exc
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
  2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   
File 

[Yahoo-eng-team] [Bug 1501206] Re: router:dhcp ports are open resolvers

2018-03-02 Thread new
** Changed in: neutron (Ubuntu)
 Assignee: (unassigned) => new (cloudie)

** Changed in: neutron (Ubuntu)
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501206

Title:
  router:dhcp ports are open resolvers

Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  In Progress

Bug description:
  When configuring a public IPv4 subnet with DHCP enabled inside
  Neutron (and attaching it to an Internet-connected router), the DNS
  recursive resolver service provided by dnsmasq inside the qdhcp
  network namespace will respond to DNS queries from the entire
  Internet. This is a huge problem from a security standpoint, as open
  resolvers are very likely to be abused for DDoS purposes. This not
  only causes significant damage to third parties (i.e., the true
  destination of the DDoS attack and every network in between), but
  also to the local network or servers (due to saturation of all the
  available network bandwidth and/or the processing capacity of the
  node running the dnsmasq instance). Quoting from
  http://openresolverproject.org/:

  «Open Resolvers pose a significant threat to the global network
  infrastructure by answering recursive queries for hosts outside of its
  domain. They are utilized in DNS Amplification attacks and pose a
  similar threat as those from Smurf attacks commonly seen in the late
  1990s.

  [...]

  What can I do?

  If you operate a DNS server, please check the settings.

  Recursive servers should be restricted to your enterprise or customer
  IP ranges to prevent abuse. Directions on securing BIND and Microsoft
  nameservers can be found on the Team CYMRU Website - If you operate
  BIND, you can deploy the TCP-ANY patch»

  It seems reasonable to expect that the dnsmasq instance within Neutron
  would only respond to DNS queries from the subnet prefixes it is
  associated with and ignore all others.
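
  A hedged way to demonstrate the problem from an unrelated host on the
  Internet (the address below is a placeholder for the router:dhcp
  port's public IPv4 address):

  $ dig @<qdhcp-port-public-ipv4> www.example.com +short
  # any answer here means the dnsmasq instance is recursing for the
  # whole Internet, i.e. it is an open resolver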

  Note that this only occurs for IPv4. That is however likely just a
  symptom of bug #1499170, which breaks all IPv6 DNS queries (external
  as well as internal). I would assume that when bug #1499170 is fixed,
  the router:dhcp ports will immediately start being open resolvers over
  IPv6 too.

  For what it's worth, the reason I noticed this issue in the first
  place was that NorCERT (the national Norwegian Computer Emergency
  Response Team - http://www.cert.no/) got in touch with us, notifying
  us about the open resolvers they had observed in our network and
  insisted that we lock them down ASAP. It only took NorCERT a couple
  of days after the subnet was first created to do so.

  Tore

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501206/+subscriptions



[Yahoo-eng-team] [Bug 1752917] [NEW] inappropriate mocking in openstack_dashboard/test/unit/api/rest/test_*.py

2018-03-02 Thread Akihiro Motoki
Public bug reported:

Tests under openstack_dashboard/test/unit/api/rest mock whole API modules with
decorators like @mock.patch.object(api, 'neutron').
This is not appropriate mocking; we should mock only the methods we really
expect to be called.

In addition, there are several points to be improved in these tests.

- Unnecessary usage of test.mock_factory(). Test data can be used directly.
- Related to the above, setUp() is often unnecessary because it is only used to
prepare data with mock_factory.
- Test data should be accessed via something like self.networks.list() rather
than calling TestData directly.

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1752917

Title:
  inappropriate mocking in
  openstack_dashboard/test/unit/api/rest/test_*.py

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Tests under openstack_dashboard/test/unit/api/rest mock whole API
  modules with decorators like @mock.patch.object(api, 'neutron').
  This is not appropriate mocking; we should mock only the methods we
  really expect to be called.

  In addition, there are several points to be improved in these tests.

  - Unnecessary usage of test.mock_factory(). Test data can be used directly.
  - Related to the above, setUp() is often unnecessary because it is only
    used to prepare data with mock_factory.
  - Test data should be accessed via something like self.networks.list()
    rather than calling TestData directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1752917/+subscriptions



[Yahoo-eng-team] [Bug 1752903] [NEW] Floating IPs should not allocate IPv6 addresses

2018-03-02 Thread Dr. Jens Harbott
Public bug reported:

When there are both IPv4 and IPv6 subnets on the public network, the
port that a floating IP allocates there gets assigned addresses from
both subnets. Since a floating IP is a pure IPv4 construct, allocating
an IPv6 address for it is completely useless and should be avoided,
because it will, for example, block removing the IPv6 subnet without a
good reason. Seen in Pike as well as in master.
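
A hedged way to observe the behaviour on the CLI (the public network name and
the port ID are placeholders):

$ openstack floating ip create public
$ openstack port list --network public --device-owner network:floatingip
$ openstack port show <floating-ip-port-id> -c fixed_ips
# fixed_ips lists an address from the IPv4 subnet and, needlessly, one from
# the IPv6 subnet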

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1752903

Title:
  Floating IPs should not allocate IPv6 addresses

Status in neutron:
  New

Bug description:
  When there are both IPv4 and IPv6 subnets on the public network, the
  port that a floating IP allocates there gets assigned addresses from
  both subnets. Since a floating IP is a pure IPv4 construct, allocating
  an IPv6 address for it is completely useless and should be avoided,
  because it will, for example, block removing the IPv6 subnet without a
  good reason. Seen in Pike as well as in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1752903/+subscriptions



[Yahoo-eng-team] [Bug 1752896] [NEW] novncproxy in Newton uses outdated novnc 0.5 which breaks Nova noVNC consoles

2018-03-02 Thread Felipe Alfaro Solana
Public bug reported:

Delorean Newton (CentOS 7) ships with noVNC 0.5.2 in the Overcloud
images. Even building an Overcloud image (DIB) produces an image with
noVNC 0.5.2. The problem seems to be that CentOS 7 does not ship
anything newer than 0.5.2. However, Red Hat Enterprise Linux 7 does
indeed ship noVNC 0.6.

In any case, Nova noVNC consoles in Newton don't work with noVNC 0.5.2.
My workaround was to customize the Overcloud base image and replace the
0.5.2 RPM with a 0.6.2 RPM that I downloaded from some CentOS CI
repository.
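
A hedged sketch of that workaround using virt-customize (the RPM file name and
version are placeholders for whatever package is actually downloaded):

$ virt-customize -a overcloud-full.qcow2 \
    --upload novnc-0.6.2-1.el7.noarch.rpm:/tmp/novnc-0.6.2-1.el7.noarch.rpm \
    --run-command 'rpm -Uvh --force /tmp/novnc-0.6.2-1.el7.noarch.rpm'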

Steps to reproduce
==

Follow instructions from 
https://docs.openstack.org/tripleo-docs/latest/install/installation/installing.html
 to install an OpenStack Undercloud using Newton, and either download the 
Overcloud base images from here 
https://images.rdoproject.org/newton/delorean/consistent/testing/ or build them 
yourself directly from the Undercloud.

In any case, the Overcloud base image ships with noVNC 0.5.2-1 instead
of 0.6.*.

Expected result
===

A newer version of noVNC that does not break the Nova noVNC console.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752896

Title:
  novncproxy in Newton uses outdated novnc 0.5 which breaks Nova noVNC
  consoles

Status in OpenStack Compute (nova):
  New

Bug description:
  Delorean Newton (CentOS 7) ships with noVNC 0.5.2 in the Overcloud
  images. Even building an Overcloud image (DIB) produces an image with
  noVNC 0.5.2. The problem seems to be that CentOS 7 does not ship
  anything newer than 0.5.2. However, Red Hat Enterprise Linux 7 does
  indeed ship noVNC 0.6.

  In any case, Nova noVNC consoles in Newton don't work with noVNC
  0.5.2. My workaround was to customize the Overcloud base image and
  replace the 0.5.2 RPM with a 0.6.2 RPM that I downloaded from some
  CentOS CI repository.

  Steps to reproduce
  ==

  Follow instructions from 
https://docs.openstack.org/tripleo-docs/latest/install/installation/installing.html
 to install an OpenStack Undercloud using Newton, and either download the 
Overcloud base images from here 
  https://images.rdoproject.org/newton/delorean/consistent/testing/ or build 
them yourself directly from the Undercloud.

  In any case, the Overcloud base image ships with noVNC 0.5.2-1 instead
  of 0.6.*.

  Expected result
  ===

  A newer version of noVNC that does not break the Nova noVNC console.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752896/+subscriptions



[Yahoo-eng-team] [Bug 1752115] Re: detach multiattach volume disconnects innocent bystander

2018-03-02 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Tags added: multiattach volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752115

Title:
  detach multiattach volume disconnects innocent bystander

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  Detaching a multi-attached lvm volume from one server causes the
  other server to lose connectivity to the volume. I found this while
  developing a new tempest test to test this scenario.

  - create 2 instances on the same host, both simple instances with ephemeral 
disks
  - create a multi-attach lvm volume, attach to both instances
  - check that you can re-read the partition table from inside each instance 
(via ssh):

 $ sudo blockdev --rereadpt /dev/vdb

     This succeeds on both instances (no output or error message is
     returned).

  - detach the volume from one of the instances
  - recheck connectivity. The expected result is that the command will now fail 
in the instance where 
the volume was detached. But it also fails on the instance where the volume 
is still supposedly 
attached:

 $ sudo blockdev --rereadpt /dev/vdb
 BLKRRPART: Input/output error

  cinder & nova still think that the volume is attached correctly:

  $ cinder show 2cf26a15-8937-4654-ba81-70cbcb97a238 | grep attachment
  | attachment_ids | ['f5876aff-5b5b-45a0-a020-515ca339eae4']   

  $ nova show vm1 | grep attached
  | os-extended-volumes:volumes_attached | [{"id": 
"2cf26a15-8937-4654-ba81-70cbcb97a238", "delete_on_termination": false}] |

  cinder version:

  :/opt/stack/cinder$ git show
  commit 015b1053990f00d1522c1074bcd160b4b57a5801
  Merge: 856e636 481535e
  Author: Zuul 
  Date:   Thu Feb 22 14:00:17 2018 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752115/+subscriptions



[Yahoo-eng-team] [Bug 1752824] [NEW] VMware: error while get serial console log

2018-03-02 Thread yan97ao
Public bug reported:

Description
===
We are evaluating VMware as the Nova compute backend and ran into trouble
getting a VM's serial console log.


Steps to reproduce
==
nova console-log <VM-UUID>

Expected result
===
Get the serial console log of VM-UUID

Actual result
=
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-cfd43272-2a1a-4296-b56d-86c394c29e64)

Logs
===
2018-03-02 16:55:12.395 15462 INFO nova.compute.manager 
[req-cfd43272-2a1a-4296-b56d-86c394c29e64 75ee831637464e53a5a5e63dd88671e9 
750d99c807e94c039c694f10ca1bb244 - default default] [instance: 
ff784d6d-178c-4237-9621-92cc52421dc1] Get console output
2018-03-02 16:55:12.396 15462 WARNING nova.virt.vmwareapi.driver 
[req-cfd43272-2a1a-4296-b56d-86c394c29e64 75ee831637464e53a5a5e63dd88671e9 
750d99c807e94c039c694f10ca1bb244 - default default] yangtao 
/opt/vmware/vspc/ff784d6d178c4237962192cc52421dc1
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
[req-cfd43272-2a1a-4296-b56d-86c394c29e64 75ee831637464e53a5a5e63dd88671e9 
750d99c807e94c039c694f10ca1bb244 - default default] Exception during message 
handling: TypeError: coercing to Unicode: need string or buffer, file found
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, 
in dispatch
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _do_dispatch
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 232, in 
inner
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 217, in 
decorated_function
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 205, in 
decorated_function
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4653, in 
get_console_output
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server output = 
self.driver.get_console_output(context, instance)
2018-03-02 16:55:12.484 15462 ERROR oslo_messaging.rpc.server   File