> On Dec 7, 2020, at 3:02 PM, Damjan Marion <damjan.mar...@gmail.com> wrote:
> 
>> On 07.12.2020., at 20:41, Christian Hopps <cho...@chopps.org> wrote:
>> 
>>> On Dec 7, 2020, at 1:44 PM, Damjan Marion <damjan.mar...@gmail.com> wrote:
>>> 
>>>> On 07.12.2020., at 17:02, Christian Hopps <cho...@chopps.org> wrote:
>>>> 
>>> 
>>> please send me output of: extras/scripts/lsnet script and exact “create interface avf” commands you use….
>> 
>> PCI Address   MAC address        Device Name  Driver   State  Speed      Port Type
>> ============  =================  ===========  =======  =====  =========  ====================
>> 0000:65:00.0  40:a6:b7:4b:62:08  enp101s0f0   i40e     down   10000Mb/s  Direct Attach Copper
>> 0000:65:00.1  40:a6:b7:4b:62:09  enp101s0f1   i40e     down   10000Mb/s  Direct Attach Copper
>> 0000:65:00.2  40:a6:b7:4b:62:0a  enp101s0f2   i40e     down   10000Mb/s  Direct Attach Copper
>> 0000:65:00.3  40:a6:b7:4b:62:0b  enp101s0f3   i40e     down   10000Mb/s  Direct Attach Copper
>> 0000:b3:00.0  00:e0:8d:7e:1f:36  enp179s0f0   ixgbe    down   Unknown!   Direct Attach Copper
>> 0000:b3:00.1  00:e0:8d:7e:1f:37  enp179s0f1   ixgbe    down   Unknown!   Direct Attach Copper
>> 0000:01:00.0  a0:42:3f:3c:f8:ee  enp1s0f0     ixgbe    up     10000Mb/s  Twisted Pair
>> 0000:01:00.1  a0:42:3f:3c:f8:ef  enp1s0f1     ixgbe    down   Unknown!   Twisted Pair
>> 0000:17:00.0  f8:f2:1e:3c:15:ec  enp23s0f0    ixgbe    down   Unknown!   Direct Attach Copper
>> 0000:17:00.1  f8:f2:1e:3c:15:ed  enp23s0f1    ixgbe    down   Unknown!   Direct Attach Copper
>> 0000:01:10.0  52:bf:27:59:df:50  eth0         ixgbevf  down   Unknown!   Other
>> 0000:01:10.2  ee:24:0b:0c:93:3f  eth1         ixgbevf  down   Unknown!   Other
>> 0000:01:10.4  9e:8e:ce:da:38:f5  eth2         ixgbevf  down   Unknown!   Other
>> 0000:01:10.6  2a:f4:a2:ea:4c:5d  eth3         ixgbevf  down   Unknown!   Other
>> 
> 
> I don't see your VFs on the list? Do you have them created?
> Do you see them with “lspci | grep Ether”?
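[ For reference, creating the VFs before launching vpp usually looks something like the sketch below; the PF name, VF count, and MAC here are illustrative assumptions, not the exact automation we run: ]

    # create VFs on the 0000:65:00.0 PF (enp101s0f0); the count here is illustrative
    echo 4 | sudo tee /sys/class/net/enp101s0f0/device/sriov_numvfs

    # confirm the VFs now exist
    lspci -d 8086: | grep -i 'Virtual Function'

    # optionally pin a MAC on a VF before handing it to vpp
    sudo ip link set enp101s0f0 vf 0 mac 02:41:0b:0b:0b:0b

    # then, inside vpp:
    #   create interface avf 0000:65:0a.0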
[ The VFs are not in the list b/c they are created as part of the automation that also launches vpp. I ran lsnet before running that, b/c once vpp is up the VFs also don't show up in that list: they have been rebound. :) ]

First, things work with 2.12; however, 2.12 does not load on reboot, so I must rmmod and modprobe after rebooting to get the 2.12 driver. I do have another problem though (mentioned at the end)...

> also, I asked for your create interface config….

create interface avf 0000:65:0a.0

As mentioned above, when I rmmod/modprobe the new driver I don't hit the lut error anymore. Now I need to figure out why I see such bad performance (tons of rx discards) when using two of these AVF interfaces, but not when using one avf VF with a 10G i520 nic (dpdk driver) as the other interface.

Bad Side:

$ docker-compose exec p1 vppctl show hard
              Name                Idx   Link  Hardware
avf-0/65/2/0                       2     up   avf-0/65/2/0
  Link speed: 10 Gbps
  Ethernet address 02:41:0d:0d:0d:0b
  flags: initialized admin-up vaddr-dma link-up rx-interrupts
  offload features: l2 adv-link-speed vlan rx-polling rss-pf
  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
  speed
  stats:
    rx bytes                      1520728906
    rx unicast                     184910147
    rx discards                    129083610
    tx bytes                      4226915088
    tx unicast                     184349288
avf-0/65/a/0                       1     up   avf-0/65/a/0
  Link speed: 10 Gbps
  Ethernet address 02:41:0b:0b:0b:0b
  flags: initialized admin-up vaddr-dma link-up rx-interrupts
  offload features: l2 adv-link-speed vlan rx-polling rss-pf
  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
  speed
  stats:
    rx bytes                      3781507000
    rx unicast                      93533516
    rx broadcast                           3
    rx discards                     73223358
    tx bytes                      2212324998
    tx unicast                      13714424
ipsec0                             3     up   ipsec0
  Link speed: unknown
  IPSec
local0                             0    down  local0
  Link speed: unknown
  local
loop0                              4     up   loop0
  Link speed: unknown
  Ethernet address de:ad:00:00:00:00

Good Side:

$ docker-compose exec p2 vppctl show hard
              Name                Idx   Link  Hardware
TenGigabitEthernet17/0/0           1     up   TenGigabitEthernet17/0/0
  Link speed: 10 Gbps
  Ethernet address f8:f2:1e:3c:15:ec
  Intel 82599
    carrier up full duplex mtu 9206
    flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
    Devargs:
    rx: queues 1 (max 128), desc 1024 (min 32 max 4096 align 8)
    tx: queues 6 (max 64), desc 1024 (min 32 max 4096 align 8)
    pci: device 8086:10fb subsystem 8086:0003 address 0000:17:00.00 numa 0
    max rx packet len: 15872
    promiscuous: unicast off all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
                       macsec-strip vlan-filter vlan-extend jumbo-frame
                       scatter security keep-crc rss-hash
    rx offload active: ipv4-cksum jumbo-frame scatter
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
                       tcp-tso macsec-insert multi-segs security
    tx offload active: udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
                       ipv6-udp ipv6-ex ipv6
    rss active:        none
    tx burst function: ixgbe_xmit_pkts
    rx burst function: ixgbe_recv_scattered_pkts_vec

    tx frames ok                                  183386223
    tx bytes ok                                277646741622
    rx frames ok                                  182834391
    rx bytes ok                                276811244995
    extended stats:
      rx_good_packets                             182834488
      tx_good_packets                             183386223
      rx_good_bytes                            276811391853
      tx_good_bytes                            277646741622
      rx_q0packets                                182834488
      rx_q0bytes                               276811391853
      tx_q0packets                                183386223
      tx_q0bytes                               277646741622
      rx_size_65_to_127_packets                          15
      rx_size_1024_to_max_packets                 182834425
      rx_multicast_packets                               15
      rx_total_packets                            182834439
      rx_total_bytes                           276811319181
      tx_total_packets                            183386223
      tx_size_1024_to_max_packets                 183386223
      out_pkts_untagged                           183386223
avf-0/65/e/0                       2     up   avf-0/65/e/0
  Link speed: 10 Gbps
  Ethernet address 02:42:0c:0c:0c:0c
  flags: initialized admin-up vaddr-dma link-up rx-interrupts
  offload features: l2 adv-link-speed vlan rx-polling rss-pf
  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
  speed
  stats:
    rx bytes                      2520527812
    rx unicast                      92644250
    rx broadcast                           3
    tx bytes                      2677106258
    tx unicast                      20118042
ipsec0                             3     up   ipsec0
  Link speed: unknown
  IPSec
local0                             0    down  local0
  Link speed: unknown
  local
loop0                              4     up   loop0
  Link speed: unknown
  Ethernet address de:ad:00:00:00:00

Thanks,
Chris.

> 
>> 
>>> 
>>>> 
>>>>> What kernel and driver version do you use?
>>>> 
>>>> Host Config:
>>>> 
>>>> $ cat /etc/lsb-release
>>>> DISTRIB_ID=Ubuntu
>>>> DISTRIB_RELEASE=20.10
>>>> DISTRIB_CODENAME=groovy
>>>> DISTRIB_DESCRIPTION="Ubuntu 20.10"
>>>> $ uname -a
>>>> Linux labnh 5.8.0-31-generic #33-Ubuntu SMP Mon Nov 23 18:44:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>>>> 
>>>> Docker Config (compiled and run inside):
>>>> 
>>>> root@p1:/# cat /etc/lsb-release
>>>> DISTRIB_ID=Ubuntu
>>>> DISTRIB_RELEASE=18.04
>>>> DISTRIB_CODENAME=bionic
>>>> DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
>>>> 
>>>>> Have you tried latest PF driver from intel?
>>>> 
>>>> No; however, I am running such a new ubuntu (5.8 kernel) so I was hoping that was sufficient.
>>> 
>>> Agree, still, please, do me a favour and try with latest, so I know i’m looking at the same thing.
>> 
>> I did the build and installed the deb, and I also updated the firmware on the NIC; however:
>> 
>> $ dpkg -l | grep i40
>> ii  i40e-dkms  2.12.6  all  Intel i40e adapter driver
>> 
>> $ sudo ethtool -i enp101s0f0
>> driver: i40e
>> version: 2.8.20-k
>> firmware-version: 8.15 0x80009621 1.2829.0
>> expansion-rom-version:
>> bus-info: 0000:65:00.0
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: yes
>> supports-register-dump: yes
>> supports-priv-flags: yes
>> 
>> It doesn't seem to have used it. Is there something else I need to do to have it use the dkms driver?
> 
> rmmod i40e; modprobe i40e
> or reboot…
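[ P.S. on 2.12 not loading at boot: on Ubuntu the in-tree i40e is typically baked into the initramfs, so it can win over the dkms-built module at every reboot. I haven't verified that this is what is happening on this box, so treat the following as a sketch of the usual check/fix rather than a confirmed answer:

    # rebuild the initramfs so it picks up the dkms-built i40e
    sudo update-initramfs -u

    # show which module file modprobe will load next time
    modinfo -n i40e

    # after the reboot, confirm the running driver version
    sudo ethtool -i enp101s0f0 | grep '^version'
]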