On Tue, 21 Jun 2022 17:29:38 +0100 Sylvain Vasseur <rem...@gmail.com> wrote:
> Hello,
>
> Did anyone already manage to use DPDK on a GCP VM and get good
> performance? I have been trying this configuration lately, get awful
> bandwidth results, and have no idea what I could be doing wrong.
>
> I use the VirtIO interfaces on my VMs and was able to use both the
> vfio-pci and igb_uio drivers, but can only get ~350 Mbps (yes, bits!)
> with both when trying to transmit data with a very basic testpmd run:
> dpdk-testpmd -a 0000:00:05.0 -- --forward-mode=txonly --stats-period 1
>
> Any non-DPDK use of the network interface gives me way better results
> (GBps!).
>
> Device:
> 00:05.0 Ethernet controller: Red Hat, Inc. Virtio network device
>
> Bind status:
> Network devices using DPDK-compatible driver
> ============================================
> 0000:00:05.0 'Virtio network device 1000' drv=igb_uio unused=vfio-pci
>
> I would bet I am doing something wrong, either with the PMD choice or
> with some setting, but I can't figure out what. Did anyone manage to
> get good performance, or have any experience using DPDK on a Google
> Cloud VM?
>
> Thanks in advance
> Sylvain

Check the negotiation of virtio features and the checksum offload bits.
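
As a quick way to see what was actually negotiated, something along these
lines (a minimal sketch, not a definitive test) dumps the offload
capability bits the virtio PMD reports after probing. Port id 0, the EAL
arguments, and the RTE_ETH_TX_OFFLOAD_TCP_CKSUM spelling (DPDK 21.11+)
are assumptions on my side:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
        /* Initialise EAL; pass the usual -a 0000:00:05.0 on the command line. */
        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "EAL init failed\n");

        uint16_t port_id = 0;   /* assumption: the virtio port is the first probed port */
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");

        /* Offload capability bits the PMD advertises; for virtio these reflect
         * the feature bits negotiated with the host. */
        printf("driver:          %s\n", dev_info.driver_name);
        printf("rx_offload_capa: 0x%" PRIx64 "\n", (uint64_t)dev_info.rx_offload_capa);
        printf("tx_offload_capa: 0x%" PRIx64 "\n", (uint64_t)dev_info.tx_offload_capa);

        if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
                printf("TX TCP checksum offload is not offered by this port\n");

        rte_eal_cleanup();
        return 0;
}

Inside an interactive testpmd session the same information should be
visible with commands along the lines of "show port info 0" and
"show port 0 tx_offload capabilities" (exact syntax depends on your
testpmd version). If the expected TX offloads are missing there, the
feature negotiation with the host is the thing to dig into.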