** Tags removed: verification-needed-jammy verification-needed-kinetic
** Tags added: verification-done-jammy verification-done-kinetic

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/2002445

Title:
  udev NIC renaming race with mlx5_core driver

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Focal:
  Fix Committed
Status in systemd source package in Jammy:
  Fix Committed
Status in systemd source package in Kinetic:
  Fix Committed
Status in systemd source package in Lunar:
  Fix Released

Bug description:
  [Impact]
  On systems with Mellanox NICs, udev's NIC renaming races with the
  mlx5_core driver's own configuration of subordinate interfaces. When the
  kernel wins this race, the device cannot be renamed as udev attempted,
  which causes systemd-network-online.target to time out waiting for links
  to be configured. This ultimately delays boot by about 2 minutes.

  [Test Plan]
  Repeated launches of Standard_D8ds_v5 instance types will generally hit
  this race around 1 in 10 runs. Create a VM snapshot with updated systemd
  from ppa:enr0n/systemd-245. Launch 100 Standard_D8ds_v5 instances with
  the updated systemd. Assert no failure in cloud-init status and no
  2-minute delay in network-online.target.

  To check for failure symptom:
    - Assert that network-online.target isn't the longest pole in
      systemd-analyze blame output.
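  The failure symptom can be checked programmatically. A minimal sketch of
  that check; the helper name and sample blame output below are
  illustrative, not captured from a real run:

```python
# Sketch: flag runs where systemd-networkd-wait-online.service is the
# longest entry in `systemd-analyze blame` output. The sample text below
# is fabricated for illustration.
SAMPLE_BLAME = """\
2min 53ms systemd-networkd-wait-online.service
   1.202s cloud-init.service
    864ms snapd.service
"""


def wait_online_is_longest(blame_output: str) -> bool:
    lines = blame_output.splitlines()
    if not lines:
        return False
    # blame sorts entries longest-first, so only the top line matters
    return "systemd-networkd-wait-online.service" in lines[0]
```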

  To assert success condition during the net rename busy race:
    - When "eth1" is still the primary device name, assert that two
      altnames are listed (the altname is preserved when the primary NIC
      rename is refused).
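  That success condition reduces to a small check over `ip -j addr` JSON,
  mirroring what the full script below does. A sketch with fabricated
  sample data (the interface and altname values are illustrative):

```python
import json

# Sketch: during the rename-busy race, "eth1" stays the primary name and
# the intended new name survives as an extra altname, so eth1 should list
# two altnames. Sample data below mimics `ip -j addr` output.
SAMPLE_IP_ADDR_JSON = json.dumps([
    {"ifname": "eth0", "altnames": ["enP1s2"]},
    # rename was refused, so the intended name is kept as an altname
    {"ifname": "eth1", "altnames": ["enP2s3", "enx00224879d2fc"]},
])


def altnames_preserved(ip_addr_json: str) -> bool:
    for dev in json.loads(ip_addr_json):
        if dev["ifname"] == "eth1":
            # success condition: two altnames survive the failed rename
            return len(dev.get("altnames", [])) >= 2
    return False  # no race hit: eth1 was renamed normally
```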

  Sample script uses pycloudlib to create modified base image for test
  and launches 100 VMs of type Standard_D8ds_v5, counting both successes
  and any failures seen.

  #!/usr/bin/env python3
  # This file is part of pycloudlib. See LICENSE file for license information.
  """Basic examples of various lifecycle with an Azure instance."""

  import json
  import logging
  import os
  import sys
  from enum import Enum

  import pycloudlib

  LOG = logging.getLogger()

  base_cfg = """#cloud-config
  ssh-import-id: [chad.smith, enr0n, falcojr, holmanb, aciba]
  """

  #    source: "deb [allow-insecure=yes] https://ppa.launchpadcontent.net/enr0n/systemd-245/ubuntu focal main"
  # - apt install systemd udev -y --allow-unauthenticated

  apt_cfg = """
  # Add developer PPA
  apt:
   sources:
     systemd-testing:
       source: {source}
  # upgrade systemd after cloud-init is nearly done
  runcmd:
   - apt install systemd udev -y --allow-unauthenticated
  """

  debug_systemd_cfg = """
  # Create systemd-udev debug override.conf in base image
  write_files:
  - path: /etc/systemd/system/systemd-networkd.service.d/override.conf
    owner: root:root
    defer: {defer}
    content: |
      [Service]
      Environment=SYSTEMD_LOG_LEVEL=debug
      
  - path: /etc/systemd/system/systemd-udevd.service.d/override.conf
    owner: root:root
    defer: {defer}
    content: |
      [Service]
      Environment=SYSTEMD_LOG_LEVEL=debug
      LogRateLimitIntervalSec=0
  """

  cloud_config = base_cfg + apt_cfg + debug_systemd_cfg
  cloud_config2 = base_cfg + debug_systemd_cfg

  
  class BootCondition(Enum):
      SUCCESS_WITHOUT_RENAME_RACE = "network bringup success without rename race"
      SUCCESS_WITH_RENAME_RACE = "network bringup success with rename race"
      ERROR_NETWORK_TIMEOUT = "error: timeout on systemd-networkd-wait-online"

  
  def batch_launch_vm(
      client, instance_type, image_id, user_data, instance_count=5
  ):
      instances = []
      while len(instances) < instance_count:
          instances.append(
              client.launch(
                  image_id=image_id,
                  instance_type=instance_type,
                  user_data=user_data,
              )
          )
      return instances

  
  def get_boot_condition(test_idx, instance):
      blame = instance.execute("systemd-analyze blame").splitlines()
      try:
          LOG.info(
              f"--- Attempt {test_idx} ssh ubuntu@{instance.ip} Blame: {blame[0]}"
          )
      except IndexError:
          LOG.warning(f"--- Attempt {test_idx} Empty blame {blame}?")
          LOG.info(instance.execute("systemd-analyze blame"))
          blame = [""]
      altnames_persisted = False
      ip_addr = json.loads(instance.execute("ip -j addr").stdout)
      rename_race_present = False  # set true when we see eth1 not renamed
      for d in ip_addr:
          if d["ifname"] == "eth1":
              rename_race_present = True
              if len(d.get("altnames", [])) > 1:
                  LOG.info(
                      f"--- SUCCESS persisting altnames {d['altnames']} due to rename race on resource busy on {d['ifname']}"
                  )
                  altnames_persisted = True
              else:
                  LOG.error(
                      f"FAILURE to preserve altnames for {d['ifname']}. Only preserved {d.get('altnames', [])}"
                  )
                  LOG.info(
                      instance.execute(
                          "journalctl -u systemd-udevd.service -b 0 --no-pager"
                      )
                  )
      LOG.info(
          "\n".join([f'{d["ifname"]}: {d.get("altnames")}' for d in ip_addr])
      )
      if "systemd-networkd-wait-online.service" not in blame[0]:
          if rename_race_present:
              return BootCondition.SUCCESS_WITH_RENAME_RACE, altnames_persisted
          else:
              LOG.info(f"Destroying instance, normal boot seen: {blame[0]}")
              return (
                  BootCondition.SUCCESS_WITHOUT_RENAME_RACE,
                  altnames_persisted,
              )
      else:
          LOG.info(
              f"--- Attempt {test_idx} found delayed instance boot: {blame[0]}: ssh ubuntu@{instance.ip}"
          )
          r = instance.execute(
              "journalctl -u systemd-udevd.service -b 0 --no-pager"
          )
          LOG.info(r)
          if "Failure to rename" in str(r):
              LOG.info(f"Found rename refusal!: {r[0]}")
          return BootCondition.ERROR_NETWORK_TIMEOUT, altnames_persisted

  
  def debug_systemd_image_launch_overlake_v5_with_snapshot(
      release="jammy", with_ppa=False
  ):
      """Test overlake v5 timeouts

      test procedure:
      - Launch base jammy image
      - enable ppa:enr0n/systemd-245 and systemd/udev debugging
      - cloud-init clean --logs && deconfigure waalinux agent before shutdown
      - snapshot a base image
      - launch v5 system from snapshot
      - check systemd-analyze for expected timeout
      """
      apt_source = (
          '"deb http://archive.ubuntu.com/ubuntu $RELEASE-proposed main"'
      )
      if with_ppa:
          apt_source = '"deb [allow-insecure=yes] https://ppa.launchpadcontent.net/enr0n/{ppa}/ubuntu $RELEASE main"'
          ppas = {
              "focal": "systemd-245",
              "jammy": "systemd-249",
              "kinetic": "systemd-251",
          }
          apt_source = apt_source.format(ppa=ppas.get(release, "systemd"))

      client = pycloudlib.Azure(tag="azure")

      image_id = client.daily_image(release=release)
      pub_path = "/home/ubuntu/.ssh/id_rsa.pub"
      priv_path = "/home/ubuntu/.ssh/id_rsa"

      client.use_key(pub_path, priv_path)

      base_instance = client.launch(
          image_id=image_id,
          instance_type="Standard_DS1_v2",
          user_data=cloud_config.format(defer="true", source=apt_source),
      )

      LOG.info(f"base instance: ssh ubuntu@{base_instance.ip}")
      base_instance.wait()
      LOG.info(base_instance.execute("apt policy systemd"))
      snapshotted_image_id = client.snapshot(base_instance)

      reproducer = False
      success_count_with_race = 0
      success_count_no_race = 0
      failure_count_network_delay = 0
      failure_count_no_altnames = 0
      tests_launched = 0
      TEST_SUMMARY_TMPL = """
      ----- Test run complete: {tests_launched} attempted -----
      Successes without rename race: {success_count_no_race}
      Successes with rename race and preserved altname: {success_count_with_race}
      Failures due to network delay: {failure_count_network_delay}
      Failures due to no altnames persisted: {failure_count_no_altnames}
      ===================================
      """
      instances = [base_instance]
      for batch_count in [10] * 10:
          test_instances = batch_launch_vm(
              client=client,
              image_id=snapshotted_image_id,
              instance_type="Standard_D8ds_v5",
              user_data=cloud_config.format(defer="false", source=apt_source),
              instance_count=batch_count,
          )
          for test_idx, instance in enumerate(test_instances, tests_launched):
              LOG.info(f"--- Attempt {test_idx} ssh ubuntu@{instance.ip}")
              instance.wait()
              boot_condition, altnames_persisted = get_boot_condition(
                  test_idx, instance
              )
              if boot_condition == BootCondition.SUCCESS_WITH_RENAME_RACE:
                  instance.delete(wait=False)
                  success_count_with_race += 1
                  if not altnames_persisted:
                      failure_count_no_altnames += 1
              elif boot_condition == BootCondition.SUCCESS_WITHOUT_RENAME_RACE:
                  instance.delete(wait=False)
                  success_count_no_race += 1
                  if not altnames_persisted:
                      failure_count_no_altnames += 1
              elif boot_condition == BootCondition.ERROR_NETWORK_TIMEOUT:
                  instances.append(instance)
                  failure_count_network_delay += 1
                  if not altnames_persisted:
                      failure_count_no_altnames += 1
              else:
                  raise RuntimeError(f"Invalid boot condition: {boot_condition}")
          tests_launched += len(test_instances)
      LOG.info(
          TEST_SUMMARY_TMPL.format(
              success_count_with_race=success_count_with_race,
              success_count_no_race=success_count_no_race,
              failure_count_network_delay=failure_count_network_delay,
              failure_count_no_altnames=failure_count_no_altnames,
              tests_launched=tests_launched,
          )
      )
      base_instance.delete(wait=False)

  
  if __name__ == "__main__":
      # Avoid polluting the log with azure info
      logging.getLogger("paramiko").setLevel(logging.WARNING)
      logging.getLogger("pycloudlib").setLevel(logging.WARNING)
      logging.getLogger("adal-python").setLevel(logging.WARNING)
      logging.getLogger("cli.azure.cli.core").setLevel(logging.WARNING)
      release = "jammy" if len(sys.argv) < 2 else sys.argv[1]
      with_ppa = os.environ.get("WITH_PPA", "").lower() in ["y", "true", "1"]
      prefix = "ppa" if with_ppa else "sru"
      logging.basicConfig(
          filename=f"{prefix}-systemd-{release}.log", level=logging.INFO
      )
      debug_systemd_image_launch_overlake_v5_with_snapshot(release, with_ppa)


  [Where problems could occur]
  The patches effectively make it so that if an interface cannot be renamed
  by udev, the intended new name is kept as an alternative name as a
  fallback. If problems occur, they would be related to device renaming,
  and particularly to a device's alternative names.

  For Jammy and Kinetic, there are additional patches in udev. These
  patches clean up/revert device properties that were changed as a part
  of the rename attempt. If there were regressions due to these patches,
  we would likely see erroneous device properties (e.g. shown by udevadm
  info) on network devices after a rename failure.
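  One way to spot such a regression is to compare the `E: KEY=value`
  property lines that `udevadm info` prints for a device before and after
  a failed rename. A minimal parsing sketch; the sample output and the
  exact property set shown are fabricated for illustration:

```python
# Sketch: extract the ID_NET_NAME* properties from `udevadm info` output
# (properties are printed as "E: KEY=value" lines). Sample output below
# is fabricated, not from a real device.
SAMPLE_UDEVADM_INFO = """\
P: /devices/pci0000:00/0000:00:02.0/net/eth1
E: INTERFACE=eth1
E: ID_NET_DRIVER=mlx5_core
E: ID_NET_NAME=eth1
"""


def net_name_properties(udevadm_output: str) -> dict:
    props = {}
    for line in udevadm_output.splitlines():
        if line.startswith("E: "):
            key, _, value = line[3:].partition("=")
            # only keep the naming-related properties the patches touch
            if key.startswith("ID_NET_NAME"):
                props[key] = value
    return props
```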

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/2002445/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
