Hi all / Maxime / Chenbo,

I’m observing an issue with virtio_user (vhost-net backend) where VLAN
offload does not insert the VLAN tag, even though both vlan_tci and
ol_flags are set correctly in the mbuf. We are using DPDK 24.11
(custom build) with virtio_user for exception-path traffic.

Context

   - *Scenario*: When a packet with vlan_tci set is transmitted via
   virtio_user with RTE_ETH_TX_OFFLOAD_VLAN_INSERT enabled, the packet
   received on the parent TAP interface lacks the VLAN tag. It is
   subsequently dropped, since it was meant for the TAP sub-interface
   corresponding to that VLAN.

   - *Expected*: The packet should carry the VLAN tag when seen on the TAP
   interface, allowing it to be delivered to the correct sub-interface.

Observations

   - At the point of calling virtio_xmit_pkts():

      - vlan_tci is correctly set (e.g., 61).

      - ol_flags contains RTE_MBUF_F_TX_VLAN (bit 0) => value = 0x1.

   - We confirmed the following in GDB just before calling
   virtqueue_enqueue_xmit():

(gdb) p **tx_pkts

$6 = { ..., vlan_tci = 61, ..., ol_flags = 65, ... }

(gdb) p /t tx_pkts.ol_flags

$8 = 1000001


The value 65 (0b1000001) confirms that the per-mbuf RTE_MBUF_F_TX_VLAN
flag (bit 0, per the check above) is set; RTE_ETH_TX_OFFLOAD_VLAN_INSERT
is the corresponding device-level offload, which is enabled in txmode.
Despite this, the packet captured on the TAP interface is a plain Ethernet
frame, with no VLAN tag.
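As a sanity check on the capture itself, telling a tagged frame from a plain one only requires looking at the two bytes after the MAC addresses (offset 12): an 802.1Q frame carries TPID 0x8100 there. The helper below is a minimal standalone sketch, not DPDK code; `frame_has_vlan_tag` is a hypothetical name.

```c
#include <stdint.h>

/* Sketch: returns 1 if the Ethernet frame carries an 802.1Q tag
 * (TPID 0x8100 at offset 12, network byte order), 0 otherwise.
 * Caller must ensure the frame is at least 14 bytes long. */
int frame_has_vlan_tag(const uint8_t *frame)
{
    return frame[12] == 0x81 && frame[13] == 0x00;
}
```

Running this over the bytes captured on the TAP interface confirms the frames are plain (EtherType such as 0x0800 at offset 12, no 0x8100 TPID).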

Adding a few more GDB outputs:

(gdb) p rte_eth_devices[33]

$4 = {rx_pkt_burst = 0x558180950c36 <virtio_recv_mergeable_pkts>,
tx_pkt_burst = 0x558180952440 <virtio_xmit_pkts>, tx_pkt_prepare =
0x558180951fa3 <virtio_xmit_pkts_prepare>, rx_queue_count = 0x0,
rx_descriptor_status = 0x0,

  tx_queue_count = 0x0, tx_descriptor_status = 0x0, recycle_tx_mbufs_reuse
= 0x0, recycle_rx_descriptors_refill = 0x0, data = 0x60005fd8e080,
process_private = 0x0, dev_ops = 0x55818211e2a0 <virtio_eth_dev_ops>,

  flow_fp_ops = 0x55818218d140 <rte_flow_fp_default_ops>, device =
0x558185a960b0, intr_handle = 0x558185a96160, link_intr_cbs = {tqh_first =
0x0, tqh_last = 0x5581825a42f8 <rte_eth_devices+547128>}, post_rx_burst_cbs
= {

    0x0 <repeats 1024 times>}, pre_tx_burst_cbs = {0x0 <repeats 1024
times>}, state = RTE_ETH_DEV_ATTACHED, security_ctx = 0x0}


(gdb) p *rte_eth_devices[33].data

$6 = {name = "virtio_user0", '\000' <repeats 51 times>, rx_queues =
0x60005fc2ef80, tx_queues = 0x60005fc2cf00, nb_rx_queues = 1, nb_tx_queues
= 1, sriov = {active = 0 '\000', nb_q_per_pool = 0 '\000', def_vmdq_idx =
0, def_pool_q_idx = 0},

  dev_private = 0x60005fc60380, dev_link = {{val64 = 34359738367,
{link_speed = 4294967295, link_duplex = 1, link_autoneg = 1, link_status =
1}}}, dev_conf = {link_speeds = 0, rxmode = {mq_mode = RTE_ETH_MQ_RX_NONE,
mtu = 9000,

      max_lro_pkt_size = 0, offloads = 8193, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = RTE_ETH_MQ_TX_NONE,
offloads = 32813, pvid = 0, hw_vlan_reject_tagged = 0 '\000',
hw_vlan_reject_untagged = 0 '\000',

      hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs
= {0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
rss_key_len = 0 '\000', rss_hf = 0, algorithm =
RTE_ETH_HASH_FUNCTION_DEFAULT},

      vmdq_dcb_conf = {nb_queue_pools = 0, enable_default_pool = 0 '\000',
default_pool = 0 '\000', nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0,
pools = 0} <repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"},
dcb_rx_conf = {

        nb_tcs = 0, dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf
= {nb_queue_pools = 0, enable_default_pool = 0 '\000', default_pool = 0
'\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0,
pool_map = {{

            vlan_id = 0, pools = 0} <repeats 64 times>}}}, tx_adv_conf =
{vmdq_dcb_tx_conf = {nb_queue_pools = 0, dcb_tc =
"\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = 0, dcb_tc =
"\000\000\000\000\000\000\000"}, vmdq_tx_conf = {

        nb_queue_pools = 0}}, dcb_capability_en = 0, intr_conf = {lsc = 0,
rxq = 0, rmv = 0}}, mtu = 9000, min_rx_buf_size = 4294967295,
rx_mbuf_alloc_failed = 0, mac_addrs = 0x60005ff404c0, mac_pool_sel = {0
<repeats 128 times>},

  hash_mac_addrs = 0x0, port_id = 33, promiscuous = 0 '\000', scattered_rx
= 0 '\000', all_multicast = 0 '\000', dev_started = 1 '\001', lro = 0
'\000', dev_configured = 1 '\001', flow_configured = 0 '\000',

  rx_queue_state = "\001", '\000' <repeats 1022 times>, tx_queue_state =
"\001", '\000' <repeats 1022 times>, dev_flags = 0, numa_node = -1,
vlan_filter_conf = {ids = {0 <repeats 64 times>}}, owner = {id = 0,

    name = '\000' <repeats 63 times>}, representor_id = 0, backer_port_id =
64, flow_ops_mutex = {__data = {__lock = 0, __count = 0, __owner = 0,
__nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev =
0x0, __next = 0x0}},

    __size = '\000' <repeats 39 times>, __align = 0}}


(gdb) p *(struct virtio_user_dev*)rte_eth_devices[33].data.dev_private

$3 = {hw = {vqs = 0x60005ff40400, guest_features = 4563441697,
vtnet_hdr_size = 12, started = 1 '\001', weak_barriers = 1 '\001',
vlan_strip = 1 '\001', rx_ol_scatter = true, has_tx_offload = 1 '\001',
has_rx_offload = 0 '\000',

    use_vec_rx = 0 '\000', use_vec_tx = 0 '\000', use_inorder_rx = 0
'\000', use_inorder_tx = 0 '\000', opened = 1 '\001', port_id = 33,
mac_addr = "\000PV\235#g", get_speed_via_feat = false, speed = 4294967295,
duplex = 1 '\001',

    intr_lsc = 1 '\001', max_mtu = 9698, max_rx_pkt_len = 9030, state_lock
= {locked = 0}, inject_pkts = 0x0, max_queue_pairs = 1, rss_rx_queues = 0,
rss_hash_types = 0, rss_reta = 0x0, rss_key = 0x0,

    req_guest_features = 9223372445160806441, cvq = 0x0, use_va = true},
backend_type = VIRTIO_USER_BACKEND_VHOST_KERNEL, is_server = false, callfds
= 0x6000396000c0, kickfds = 0x60005ff40880, mac_specified = 1,
max_queue_pairs = 1,

  queue_pairs = 1, queue_size = 2048, features = 4563441697,
device_features = 4563442051, frontend_features = 32, unsupported_features
= 17293822186582074972, status = 15 '\017', net_status = 0, mac_addr =
"\000PV\235#g",

  path = "/dev/vhost-net", '\000' <repeats 4081 times>, ifname =
0x558185a04b70 "avi_eth1", vrings = {ptr = 0x60005ff40780, split =
0x60005ff40780, packed = 0x60005ff40780}, packed_queues = 0x0, qp_enabled =
0x60005ff406c0,

  ops = 0x55818218cc80 <virtio_ops_kernel>, mutex = {__data = {__lock = 0,
__count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision
= 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39
times>,

    __align = 0}, started = true, hw_cvq = false, scvq = 0x0, backend_data
= 0x558185a96100, notify_area = 0x0}



Workaround

If we explicitly insert the VLAN tag into the packet data (i.e., inline
VLAN insertion), the TAP interface sees the tag as expected, and the packet
is correctly forwarded to the intended sub-interface.
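For reference, the inline insertion boils down to shifting the MAC header down by 4 bytes and splicing in the 802.1Q header (TPID 0x8100 followed by the TCI). The sketch below is standalone and illustrative; `vlan_insert_inline` is a hypothetical name mirroring what DPDK's software `rte_vlan_insert()` does, not an actual DPDK call.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of inline VLAN insertion into a contiguous Ethernet frame:
 * shift everything after the two MAC addresses down 4 bytes, then
 * write TPID 0x8100 and the TCI in network byte order.
 * Returns the new frame length, or 0 on error (too short / no room). */
size_t vlan_insert_inline(uint8_t *frame, size_t len, size_t cap,
                          uint16_t vlan_tci)
{
    if (len < 14 || cap < len + 4)
        return 0;
    /* Move EtherType + payload 4 bytes toward the tail. */
    memmove(frame + 16, frame + 12, len - 12);
    frame[12] = 0x81;                     /* TPID 0x8100 */
    frame[13] = 0x00;
    frame[14] = (uint8_t)(vlan_tci >> 8); /* TCI, network order */
    frame[15] = (uint8_t)(vlan_tci & 0xff);
    return len + 4;
}
```

With vlan_tci = 61 (0x003d), the TAP interface then sees TPID 0x8100 / TCI 0x003d after the source MAC, and the kernel steers the frame to the matching VLAN sub-interface.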
Conclusion

It appears that the RTE_MBUF_F_TX_VLAN flag and vlan_tci are not being
honored somewhere in the virtio TX path when the vhost-net backend is used.
This leads to silent drops on the host kernel side due to the missing VLAN tag.
Request

Could you please confirm if this is a known limitation or a bug? If it’s
the latter, I’d be happy to provide more debug info or steps to reproduce.


Thanks,
Samar Yadav
Broadcom
