I have run a test on cosmic. The test involved MAAS 2.4.3 installed on
bionic on three of the blades of the UCS chassis in the customer's data
center. On a fourth blade I installed cosmic (18.10), installed libvirt
and qemu-kvm, and defined a VM similar to how MAAS defines VMs, using
this XML: https://pastebin.ubuntu.com/p/yCTRGDjx2H/
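For context, the PXE-relevant portion of such a libvirt definition looks roughly like the following. This is a hedged sketch, not the actual pastebin contents; the domain name, bridge name, and boot order here are assumptions:

```xml
<!-- Hypothetical fragment, trimmed to the PXE-relevant elements;
     not the actual pastebin XML. Names and bridge are assumptions. -->
<domain type='kvm'>
  <name>testipxe</name>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='network'/>  <!-- try PXE first, like a MAAS pod VM -->
    <boot dev='hd'/>
  </os>
  <devices>
    <interface type='bridge'>
      <source bridge='br0'/>       <!-- assumed host bridge -->
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```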

    $ lsb_release -d
    Description:        Ubuntu 18.10

The ipxe-qemu version installed from the distribution is:


The attached screenshot shows the failed PXE boot of the testipxe VM.

I added the ppa:andreserl/maas apt repository and installed ipxe-qemu,
which gave me this version:


Note that I had to edit
/etc/apt/sources.list.d/andreserl-ubuntu-maas-cosmic.list, replacing
"cosmic" with "bionic", because that repo doesn't have cosmic packages.
Then I had to downgrade ipxe-qemu, because the cosmic version is newer
than the one in the fix repo:

    # apt install ipxe-qemu=1.0.0+git-20180124.fbe8c52d-

Once I jumped through those hoops, I booted the exact same testipxe VM
that had failed to PXE boot above, and it succeeded in getting an IP
and commissioning in MAAS.

** Attachment added: "screenshot of failed boot"

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.

  iPXE ignores vlan 0 traffic

Status in MAAS:
Status in ipxe package in Ubuntu:
  Fix Released
Status in ipxe-qemu-256k-compat package in Ubuntu:
Status in linux package in Ubuntu:
Status in ipxe source package in Trusty:
  Won't Fix
Status in ipxe source package in Xenial:
  Won't Fix
Status in ipxe source package in Bionic:
Status in ipxe source package in Cosmic:
Status in ipxe source package in Disco:
  Fix Released

Bug description:

  [Impact]

   * VLAN 0 is special: it is used for QoS priority tagging, not as a
     real VLAN.
   * Some components in the stack accidentally strip or drop it; iPXE
     does so in this case.
   * Fixed by porting a patch carried by other distributions, since
     upstream didn't follow the suggestion, but it is needed for the use
     case affected by this bug (thanks Andres).

  [Test Case]

   * Comment #42 contains a virtual test setup to understand the case, but it
     does NOT trigger the issue. That requires special switch hardware that
     adds VLAN 0 tags for QoS. Therefore Vern (the reporter) will test this
     at a customer site with such hardware affected by this issue.

  [Regression Potential]

   * The only reference to VLAN tags in iPXE boot that we found was iBFT
     boot for SCSI; we tested that in comment #34 and it still works fine.
   * We didn't see any such cases in review, but there might be use cases
     that made some unexpected use of the headers which are now stripped.
     Such use would itself be incorrect, though.

  [Other Info]

   * n/a


  I have three MAAS rack/region nodes which are blades in a Cisco UCS
  chassis. This is an FCE deployment where MAAS has two DHCP servers,
  infra1 is the primary and infra3 is the secondary. The pod VMs on
  infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE
  boot. If I reconfigure the subnet to provide DHCP on infra2 (either as
  primary or secondary) then the pod VMs on infra2 will PXE boot but the
  pod VMs on the demoted infra node (that no longer serves DHCP) now
  fail to PXE boot.

  While commissioning a pod VM on infra2 I captured network traffic with
  tcpdump on the vnet interface.
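For anyone reproducing the capture: running tcpdump with -e prints the link-level headers, so a priority-tagged reply shows up as "vlan 0, p N" in the output. A minimal sketch of spotting such frames (the interface name and the sample line are fabricated for illustration, not taken from the actual dumps):

```shell
#!/bin/bash
# Assumed capture invocation on the host (needs root); -e prints
# link-level headers so any 802.1Q tag becomes visible:
#   tcpdump -e -n -i vnet0 'port 67 or port 68'
#
# Fabricated sample line in tcpdump -e style; a VLAN-0 (priority-only)
# tag appears as "vlan 0, p <priority>":
sample='12:00:01.000000 52:54:00:aa:bb:cc > 52:54:00:dd:ee:ff, ethertype 802.1Q (0x8100), length 346: vlan 0, p 7, ethertype IPv4, 10.0.0.1.67 > 10.0.0.2.68: BOOTP/DHCP, Reply'

# Flag any reply that carries a VLAN 0 tag.
if grep -q 'vlan 0,' <<<"$sample"; then
    echo "priority-tagged (VLAN 0) reply detected"
fi
```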

  Here is the dump when the PXE boot fails (no dhcp server on infra2):

  Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):

  The only difference I can see is that in the unsuccessful scenario the
  reply is an 802.1Q packet: it carries a VLAN tag for VLAN 0. Normally
  VLAN 0 traffic is passed as if it were untagged, and indeed I can ping
  between the blades with no problem. Outgoing packets are untagged
  while incoming packets are tagged VLAN 0, yet the ping works. VLAN 0
  is used as part of 802.1p to set packet priority; this is separate
  from VLAN membership and just happens to use the same ethertype for
  priority tagging.
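To make the 802.1p point concrete, here is a small decoding sketch. In an 802.1Q tag, the Tag Control Information (TCI) splits into a 3-bit priority (PCP), a drop-eligible bit (DEI), and a 12-bit VLAN ID; VID 0 means "priority tag only, no VLAN membership". The TCI value 0xe000 is an assumed example, not taken from the actual capture:

```shell
#!/bin/bash
# Decode an 802.1Q Tag Control Information (TCI) field.
# A frame with TPID 0x8100 and VID 0 is "priority-tagged": the tag
# carries only the 802.1p priority, not a real VLAN membership.
tci=0xe000                  # assumed example value
pcp=$(( tci >> 13 ))        # 3-bit 802.1p priority
dei=$(( (tci >> 12) & 1 ))  # drop-eligible indicator
vid=$(( tci & 0x0FFF ))     # 12-bit VLAN ID; 0 means priority-only
echo "PCP=$pcp DEI=$dei VID=$vid"   # → PCP=7 DEI=0 VID=0
```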

  Someone confirmed to me that the iPXE source drops all packets that
  are VLAN tagged.

  The customer is unable to figure out why the packets between the
  blades are getting VLAN tagged, so we either need to figure out how to
  make iPXE accept VLAN 0, or the customer will need to use different
  equipment for the MAAS nodes.

  I found a conversation on the ipxe-devel mailing list suggesting that
  a commit was submitted and signed off, but that was from 2016, so I'm
  not sure what became of it. Notable messages in the thread:


  Would it be possible to install a local patch as part of the FCE
  deployment? I suspect the patch(es) mentioned in the above thread
  would require some modification to apply properly.

To manage notifications about this bug go to:

Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
