** Also affects: ubuntu-z-systems
   Importance: Undecided
       Status: New

** Tags added: openstack-ibm s390x

** Package changed: linux (Ubuntu) => charms

** Changed in: ubuntu-z-systems
     Assignee: (unassigned) => Skipper Bug Screeners (skipper-screen-team)

** Changed in: charms
     Assignee: Skipper Bug Screeners (skipper-screen-team) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1660597

Title:
  nova-compute-proxy charm does not enable any migration options

Status in Ubuntu on IBM z Systems:
  New
Status in Juju Charms Collection:
  New

Bug description:
  libvirt and firewall modifications to enable live migration on zKVM
  nova compute node

  #=== Problem Description ===================================
  When adding zKVM Nova nodes to a Canonical (Juju) installation of
  OpenStack via the nova-compute-proxy charm, no configuration is made
  for guest migration.

  Errors in /var/log/nova.compute.log are of this sort:
  ERROR nova.virt.libvirt.driver Live Migration failure: Migration error:
  Your libvirt version does not support the VIR_DOMAIN_XML_MIGRATABLE
  flag or your destination node does not support retrieving listen
  addresses. In order for live migration to work properly you must
  either disable serial console or upgrade your libvirt version.

  I was able to enable live migration, but it required a number of
  configuration changes, some of which are not recommended for a secure
  installation. Note that this only works for guests booted from cinder
  volumes where cinder is backed by LVM and iSCSI.

  See the attached document for steps to enable live migration between
  two zKVM Nova compute nodes.

  Ideally this would be something that is done automatically by the
  charm, but at a minimum this should be well documented in the charm so
  that a user would be able to implement it in a secure fashion if
  desired.


  #=== Steps to Reproduce ====================================
  #===========================================================
  1. Deploy OpenStack control plane via Canonical distribution (Juju)
  2. Deploy at least 2 zKVM Nova compute nodes via nova-compute-proxy charm
  3. Configure cinder for LVM+iSCSI
  4. Deploy instance booted from a cinder volume
  5. Attempt to perform live migration
  Optional workaround: manually configure libvirtd, open the firewall,
  and restart libvirtd and nova-compute.
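  For step 5, the attempt might look like the following with the
  Mitaka/Newton-era nova client (the instance name "test-vm" and target
  host "compute2" are placeholders, not values from this bug):

  ```shell
  # Placeholder names: "test-vm" and "compute2" are examples only.
  # Request a live migration of an instance booted from a cinder volume:
  nova live-migration test-vm compute2

  # Watch the instance; it should pass through MIGRATING and return to
  # ACTIVE on the target host if migration succeeds:
  nova show test-vm | grep -E 'status|OS-EXT-SRV-ATTR:host'
  ```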

  
  #=== Host Details ==========================================
  #===========================================================
  # hostname -f
  zs95kf

  #  cat /etc/system-release
  KVM for IBM z Systems release 1.1.3-beta4.3 (Z)

  == Comment: #3 ==
  With Mitaka and Newton:
      Live migration with the serial console enabled is *sometimes*
      possible. The conditions are:
      * The serial console is enabled on the source and target host
      * The target host *must not* have the same serial console ports in
        use as the instance to be migrated.

      The second point is a very long-standing bug in Nova [1] which was
      recently fixed in Ocata [2]. Backports to Newton are proposed with [3].

  With Ocata:
      Live migration with the serial console is possible when the serial
      console is enabled on the source and target host.

  The error message you see was introduced with [4] and is raised at [5].
  This message sugar-coats the design decision to bind serial console
  availability to the host (as a config option) instead of the instance
  (via flavor or image extra specs). Upstream is open to changing that,
  but it hasn't been a priority for anyone (including me).

  
  References:
  [1] https://bugs.launchpad.net/nova/+bug/1455252
  [2] https://github.com/openstack/nova/commit/898bb133
  [3] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton+topic:%22bug+1455252%22
  [4] https://github.com/openstack/nova/commit/984cc474efc6ecbeb1240f49479b6439bc9a9416
  [5] https://github.com/openstack/nova/blob/eeb23c78914891a5a6943c09c87aceb720d45f58/nova/virt/libvirt/driver.py#L5977-L5987

  == Comment: #4 ==
  I forgot to mention that backports to upstream Mitaka won't happen, as
  that stable branch only accepts security fixes at this point in time.

  == Comment: #8 ==
  (In reply to comment #7)
  > 
  > The (insecure, test/dev-only) hypervisor settings I chose to test
  > live migration with OpenStack look like this bash script:
  > 
  >     # ====================================
  >     # Configure (unsecured) live migration
  >     # ====================================
  >     if [ -d "/etc/libvirt/" ]; then
  >         echo "Configuring live-migration via TCP..."
  >         sed -i '/^\#listen_tls/c\listen_tls = 0' /etc/libvirt/libvirtd.conf
  >         sed -i '/^\#listen_tcp/c\listen_tcp = 1' /etc/libvirt/libvirtd.conf
  >         sed -i '/^\#auth_tcp/c\auth_tcp = "none"' /etc/libvirt/libvirtd.conf
  >         sed -i '/^libvirtd_opts=/c\libvirtd_opts="-d -l"' \
  >             /etc/default/libvirt-bin # the old (deprecated) way
  >         sed -i '/^env libvirtd_opts=/c\env libvirtd_opts="-d -l"' \
  >             /etc/init/libvirt-bin.conf # the new way
  >         service libvirt-bin restart &>/dev/null
  >         echo "The live-migration via TCP is configured."
  >     fi
  > 
  >     # The live-migration happens with host name resolution, that's why we
  >     # have to add them to the list of known hosts. Using a variable for the
  >     # grep keeps the noise down.
  >     known_hosts=`grep "192.168.56." /etc/hosts`
  >     if [ -z "$known_hosts" ] ; then
  >         echo "192.168.56.150 controller" >> /etc/hosts
  >         echo "192.168.56.151 compute1" >> /etc/hosts
  >         echo "192.168.56.152 compute2" >> /etc/hosts
  >     fi
  > 
  > NOTE:
  > * The IP addresses need to be changed in your case.
  > * The script above was tested on Ubuntu 14.04; I'm not sure it fits
  >   100% for Frobisher nodes.
  > * The settings above deactivate all security and are for dev/test
  >   environments only.

  This script would work on zKVM with some modification. In the
  attachment in the original post (I'm not sure it made it through the
  BZ-LP mirror), I make the following observations:

  ## Contents of attachment 114738 ##

  To enable live migration between hosts (non-secure, POC only!),
  perform the following actions on each compute host:

  Ensure that name resolution is working.
  Use DNS for this if you can, otherwise edit /etc/hosts appropriately.

  Open up the firewall to allow libvirt to communicate, then reload so
  the permanent rules take effect in the running configuration:
  firewall-cmd --zone=public --add-port=16509/tcp --permanent
  firewall-cmd --zone=public --add-port=49152-49261/tcp --permanent
  firewall-cmd --reload

  Confirm that the ports are now open:
  firewall-cmd --zone=public --list-ports 
  49152-49261/tcp 16509/tcp

  Edit /etc/libvirt/libvirtd.conf and set:
  listen_tls = 0
  listen_tcp = 1
  auth_tcp = "none"

  Edit /etc/sysconfig/libvirtd and set:
  LIBVIRTD_ARGS="--listen"

  Restart libvirtd:
  systemctl restart libvirtd

  Check that sockets are created:
  netstat -lnp | grep 16509
  tcp        0      0 0.0.0.0:16509           0.0.0.0:*               LISTEN    
  114788/libvirtd     
  tcp6       0      0 :::16509                :::*                    LISTEN    
  114788/libvirtd

  == Comment: #15 ==
  Live-migration options should be configurable in the nova-compute-proxy
  charm to allow at least libvirt live migration over a secure connection.
  The config options are missing in the proxy charm
  (https://jujucharms.com/u/openstack-charmers-next/nova-compute-proxy/)
  but are available in the nova-compute charm
  (https://jujucharms.com/nova-compute/).

  Please enable the needed config options (e.g. enable-live-migration,
  migration-auth-type, etc) plus the needed firewall settings.
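  If the proxy charm gained the same options, enabling them would
  presumably look like the following. This is a sketch: the option names
  are taken from the existing nova-compute charm, and the proxy charm
  exposing them is exactly what this bug requests, not something that
  exists today.

  ```shell
  # Hypothetical: these options exist on nova-compute, not (yet) on
  # nova-compute-proxy.
  juju config nova-compute-proxy enable-live-migration=true
  juju config nova-compute-proxy migration-auth-type=ssh
  ```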

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1660597/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
