Have you also performed the modification of txonly.c that Microsoft recommends 
on that page?

“When you're running the previous commands on a virtual machine, change 
IP_SRC_ADDR and IP_DST_ADDR in app/test-pmd/txonly.c to match the actual IP 
address of the virtual machines before you compile. Otherwise, the packets are 
dropped before reaching the forwarder.”
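
For reference, this is roughly what that edit looks like in app/test-pmd/txonly.c
(the default values below are from memory of the 22.11 tree, and 10.0.0.4/10.0.0.5
are placeholder addresses; substitute the real IPs of your VMs):

    /* defaults: 192.18.0.1 -> 192.18.0.2 */
    #define IP_SRC_ADDR ((192U << 24) | (18 << 16) | (0 << 8) | 1)
    #define IP_DST_ADDR ((192U << 24) | (18 << 16) | (0 << 8) | 2)

    /* changed to match the VMs' actual addresses, e.g. 10.0.0.4 -> 10.0.0.5 */
    #define IP_SRC_ADDR ((10U << 24) | (0 << 16) | (0 << 8) | 4)
    #define IP_DST_ADDR ((10U << 24) | (0 << 16) | (0 << 8) | 5)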

Keep in mind that in Azure you do not have a true L2 network between two 
interfaces, even on the same subnet; everything is routed via the subnet 
gateway (x.x.x.1, MAC address 12:34:56:78:9a:bc). I would not expect an L2 
forwarding app to behave the same way it does on a regular VM or on hardware.
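
If you do go down the testpmd route, one thing worth trying (a sketch, not 
something I have verified with netvsc) is mac forwarding mode with the peer 
MACs pointed at the gateway, so forwarded frames carry a destination MAC that 
Azure will actually route:

    dpdk-testpmd -l 1-3 -n 1 -a f030:00:02.0 -a 2334:00:02.0 -- \
        --forward-mode=mac \
        --eth-peer=2,12:34:56:78:9a:bc --eth-peer=3,12:34:56:78:9a:bc -i

(--eth-peer takes a port ID; 2 and 3 here match the port numbering in your log 
below.)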

I haven’t personally used testpmd in this way in Azure, but I’ve used 
dpdk-pktgen and it took some effort to get traffic to go to the right place.

Josh

From: Nandini Rangaswamy <[email protected]>
Date: Tuesday, June 4, 2024 at 5:41 PM
To: [email protected] <[email protected]>
Subject: DPDK Netvsc - Observing very low throughput while running Testpmd

Hello,

I am trying to set up DPDK with netvsc as the master PMD on Azure, following
https://learn.microsoft.com/en-us/azure/virtual-network/setup-dpdk?tabs=ubuntu
and
https://doc.dpdk.org/guides-22.11/nics/netvsc.html.

On the Azure VM, I have a LAN and a WAN interface with accelerated networking 
enabled. I have unbound both VMBus devices from the kernel driver and bound 
them to uio_hv_generic. The DPDK version is 22.11, running on OpenWrt 5.15.150.
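
For reference, the rebinding followed the steps in the netvsc guide, roughly 
as below (eth1 stands in for whichever interface is being rebound):

    DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))
    NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
    modprobe uio_hv_generic
    echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
    echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
    echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind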

When I run testpmd in io mode to send and receive traffic between the LAN and 
WAN ports, I see very low throughput. Please find the testpmd command and 
stats below:

/opt/vc/bin/dpdk-testpmd -l 1-3 -n 1 -a f030:00:02.0  -a 2334:00:02.0 -- 
--rxq=1 --txq=1  -i
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Debug dataplane logs available - lower performance
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: 2334:00:02.0 (socket -1)
mlx5_net: No available register for sampler.
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: f030:00:02.0 (socket -1)
mlx5_net: No available register for sampler.
hn_vf_attach(): found matching VF port 0
hn_vf_attach(): found matching VF port 1
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=326912, size=2560, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 2 (socket 0)
Port 2: 00:0D:3A:42:F8:3C
Configuring Port 3 (socket 0)
Port 3: 00:0D:3A:42:FB:CD
Checking link statuses...
Done
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP 
allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
  RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 2: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 3: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port stats all

  ######################## NIC statistics for port 2  ########################
  RX-packets: 34         RX-missed: 0          RX-bytes:  2194
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  2048

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  86
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 64         TX-errors: 0          TX-bytes:  4096

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> show port stats all

  ######################## NIC statistics for port 2  ########################
  RX-packets: 34         RX-missed: 0          RX-bytes:  2194
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  2048

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  86
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 64         TX-errors: 0          TX-bytes:  4096

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> show port stats all

  ######################## NIC statistics for port 2  ########################
  RX-packets: 34         RX-missed: 0          RX-bytes:  2194
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  2048

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  86
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 64         TX-errors: 0          TX-bytes:  4096

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 2  ----------------------
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 3  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 96             TX-dropped: 0             TX-total: 96
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 2...
Stopping ports...
Done

Stopping port 3...
Stopping ports...
Done

Shutting down port 2...
Closing ports...
Port 0 is closed
Port 2 is closed
Done

Shutting down port 3...
Closing ports...
Port 1 is closed
Port 3 is closed
Done

Bye...

If I try with 2 queues, the throughput improves only slightly. I expected to 
see much larger values.
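
For the 2-queue run, the invocation was the same apart from the queue counts, 
i.e. roughly:

    /opt/vc/bin/dpdk-testpmd -l 1-3 -n 1 -a f030:00:02.0 -a 2334:00:02.0 -- \
        --rxq=2 --txq=2 -i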

After enabling debug logs, I observe that:

1. Both VMBus devices are probed and matching VF devices are found.

2. The VF devices are configured, with Rx and Tx queues set up.

Any ideas what I might be doing wrong?

Regards,

Nandini
