Re: [ovs-discuss] Ovs-dpdk bonding error
Thanks for your reply. This is my setup:

    kernel: 3.10.0-1160.el7.x86_64
    openvswitch: 2.12.0
    dpdk: 18.11.8
    NIC: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
    driver: i40e
    firmware-version: 1.67, 0x8fa8, 19.5.12

I'll try a newer version. How should the DPDK configuration change when
SR-IOV is enabled on the server with 'intel_iommu=on iommu=pt'?

On Thu, 11 Mar 2021 at 15:54, Finn, Emma wrote:
> Hi,
>
> What specific versions of OvS, DPDK, kernel and i40e driver are you using?
> Have you tried moving to a new release? I tried with OvS 2.13.1 and DPDK
> 19.11.2 and saw no error when I created the bond port listed below.
>
> Thanks,
> Emma Finn
>
> From: discuss On Behalf Of KhacThuan Bk
> Sent: Tuesday 9 March 2021 16:57
> To: ovs-discuss@openvswitch.org
> Subject: [ovs-discuss] Ovs-dpdk bonding error
>
> Dear All,
>
> I'm using ovs-dpdk with an LACP bond (bond_mode=balance-tcp). VT-d is
> enabled via intel_iommu=on iommu=pt. Sometimes, when ovs-dpdk starts, it
> raises an exception like the one below. When I removed 'intel_iommu=on
> iommu=pt', it started successfully. We are using an 'Intel Corporation
> Ethernet Controller X710 for 10GbE SFP+' with OvS 2.12 and DPDK 18.11.
> I can't find any information about the error '|dpdk|ERR|eth_i40e_dev_init():
> Failed to do parameter init: -22'. Has anyone encountered this problem?
>
> [root@COMPUTE01 admin]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-1160.el7.x86_64 root=/dev/mapper/vg00-lv_root ro
> net.ifnames=1 crashkernel=2048M spectre_v2=retpoline rd.lvm.lv=vg00/lv_root
> rd.lvm.lv=vg00/lv_swap rd.lvm.lv=vg00/lv_usr rhgb quiet
> default_hugepagesz=1G hugepagesz=1G hugepages=228 intel_iommu=on iommu=pt
> isolcpus=2-35,38-71
>
> [root@COMPUTE01 admin]# dmesg | grep -e DMAR -e IOMMU
> [    0.000000] ACPI: DMAR 6fffd000 00250 (v01 DELL PE_SC3 0001 DELL 0001)
> [    0.000000] DMAR: IOMMU enabled
> [    1.656143] DMAR: Hardware identity mapping for device 0000:19:00.0
> [    1.656145] DMAR: Hardware identity mapping for device 0000:19:00.1
>
> [root@COMPUTE01 admin]# ovs-vsctl add-bond br-vlan bond-vlan em1 em2 \
>     bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true \
>     other_config:bond-detect-mode=miimon other_config:lacp-time=fast \
>     other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 \
>     -- set Interface em1 type=dpdk options:dpdk-devargs=0000:19:00.0 \
>     other_config:pci_address=0000:19:00.0 other_config:driver=igb_uio \
>     other_config:previous_driver=i40e \
>     -- set Interface em2 type=dpdk options:dpdk-devargs=0000:19:00.1 \
>     other_config:pci_address=0000:19:00.1 other_config:driver=igb_uio \
>     other_config:previous_driver=i40e
>
> [root@COMPUTE01 admin]# ovs-vsctl show
> 295dd51d-db1d-463a-b8f9-865580d1f1b1
>     Manager "ptcp:6640:127.0.0.1"
>         is_connected: true
>     Bridge br-vlan
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         datapath_type: netdev
>         Port bond-vlan
>             Interface "em1"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:19:00.0"}
>                 error: "Error attaching device '0000:19:00.0' to DPDK"
>             Interface "em2"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:19:00.1"}
>                 error: "Error attaching device '0000:19:00.1' to DPDK"
>
> [root@COMPUTE01 admin]# cat /var/log/ovs-vswitchd.log
> 2021-03-08T10:34:01Z|00159|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.12.0
> 2021-03-08T10:34:04Z|00160|dpdk|INFO|EAL: PCI device 0000:19:00.0 on NUMA socket 0
> 2021-03-08T10:34:04Z|00161|dpdk|INFO|EAL: probe driver: 8086:1572 net_i40e
> 2021-03-08T10:34:04Z|00162|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x002676fc]. original: 0x, new: 0x
> 2021-03-08T10:34:04Z|00163|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026770c]. original: 0x, new: 0x
> 2021-03-08T10:34:04Z|00164|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267710]. original: 0x, new: 0x
> 2021-03-08T10:34:04Z|00165|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267714]. original: 0x, new: 0x
> 2021-03-08T10:34:04Z|00166|dpdk|WARN|i40e_check_write_global_reg(): i40e device
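One thing worth checking here, since 'intel_iommu=on iommu=pt' is in play:
igb_uio is not an IOMMU-aware driver, and once the IOMMU is enabled the
usual recommendation is to bind DPDK ports to vfio-pci instead. A minimal
sketch of that rebind, assuming dpdk-devbind.py from the DPDK 18.11 tree is
on PATH (the PCI addresses are the ones from this thread):

    # Load the IOMMU-aware vfio-pci driver and rebind both X710 ports
    # from igb_uio to it.
    modprobe vfio-pci
    dpdk-devbind.py --bind=vfio-pci 0000:19:00.0 0000:19:00.1

    # Confirm both ports now report drv=vfio-pci.
    dpdk-devbind.py --status-dev net

If the ports must stay on igb_uio, note that igb_uio generally requires the
IOMMU to be in passthrough mode (iommu=pt, as already configured above).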
[ovs-discuss] Ovs-dpdk bonding error
Dear All,

I'm using ovs-dpdk with an LACP bond (bond_mode=balance-tcp). VT-d is
enabled via intel_iommu=on iommu=pt. Sometimes, when ovs-dpdk starts, it
raises an exception like the one below. When I removed 'intel_iommu=on
iommu=pt', it started successfully. We are using an 'Intel Corporation
Ethernet Controller X710 for 10GbE SFP+' with OvS 2.12 and DPDK 18.11.
I can't find any information about the error '|dpdk|ERR|eth_i40e_dev_init():
Failed to do parameter init: -22'. Has anyone encountered this problem?

[root@COMPUTE01 admin]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-1160.el7.x86_64 root=/dev/mapper/vg00-lv_root ro
net.ifnames=1 crashkernel=2048M spectre_v2=retpoline rd.lvm.lv=vg00/lv_root
rd.lvm.lv=vg00/lv_swap rd.lvm.lv=vg00/lv_usr rhgb quiet
default_hugepagesz=1G hugepagesz=1G hugepages=228 intel_iommu=on iommu=pt
isolcpus=2-35,38-71

[root@COMPUTE01 admin]# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 6fffd000 00250 (v01 DELL PE_SC3 0001 DELL 0001)
[    0.000000] DMAR: IOMMU enabled
[    1.656143] DMAR: Hardware identity mapping for device 0000:19:00.0
[    1.656145] DMAR: Hardware identity mapping for device 0000:19:00.1

[root@COMPUTE01 admin]# ovs-vsctl add-bond br-vlan bond-vlan em1 em2 \
    bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true \
    other_config:bond-detect-mode=miimon other_config:lacp-time=fast \
    other_config:bond-miimon-interval=100 other_config:bond_updelay=1000 \
    -- set Interface em1 type=dpdk options:dpdk-devargs=0000:19:00.0 \
    other_config:pci_address=0000:19:00.0 other_config:driver=igb_uio \
    other_config:previous_driver=i40e \
    -- set Interface em2 type=dpdk options:dpdk-devargs=0000:19:00.1 \
    other_config:pci_address=0000:19:00.1 other_config:driver=igb_uio \
    other_config:previous_driver=i40e

[root@COMPUTE01 admin]# ovs-vsctl show
295dd51d-db1d-463a-b8f9-865580d1f1b1
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-vlan
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port bond-vlan
            Interface "em1"
                type: dpdk
                options: {dpdk-devargs="0000:19:00.0"}
                error: "Error attaching device '0000:19:00.0' to DPDK"
            Interface "em2"
                type: dpdk
                options: {dpdk-devargs="0000:19:00.1"}
                error: "Error attaching device '0000:19:00.1' to DPDK"

[root@COMPUTE01 admin]# cat /var/log/ovs-vswitchd.log
2021-03-08T10:34:01Z|00159|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.12.0
2021-03-08T10:34:04Z|00160|dpdk|INFO|EAL: PCI device 0000:19:00.0 on NUMA socket 0
2021-03-08T10:34:04Z|00161|dpdk|INFO|EAL: probe driver: 8086:1572 net_i40e
2021-03-08T10:34:04Z|00162|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x002676fc]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00163|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026770c]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00164|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267710]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00165|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267714]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00166|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026771c]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00167|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267724]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00168|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026774c]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00169|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026775c]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00170|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267760]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00171|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267764]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00172|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x0026776c]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00173|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x00267774]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00174|dpdk|WARN|i40e_check_write_global_reg(): i40e device 0000:19:00.0 changed global register [0x002677f8]. original: 0x, new: 0x
2021-03-08T10:34:04Z|00175|dpdk|WARN|i40e_aq_debug_write_global_register(): i40e device 0000:19:00.0 changed global register [0x0026c7a0]. original: 0x0, after: 0x28
2021-03-08T10:34:04Z|00176|dpdk|ERR|i40e_pf_parameter_init(
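A way to narrow this down (not from the thread itself): probe the same port
with DPDK's testpmd outside of OVS. If testpmd hits the same
eth_i40e_dev_init() failure, the problem lies in DPDK, the i40e PMD, or the
NIC firmware rather than in OVS. A rough sketch for DPDK 18.11, where -w
whitelists a single PCI device; the core list and memory-channel count are
placeholder values to adjust for the host:

    # Probe 0000:19:00.0 alone with testpmd in interactive mode.
    # -l 0-1 and -n 4 are placeholders; tune them for your machine.
    testpmd -l 0-1 -n 4 -w 0000:19:00.0 -- -i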
[ovs-discuss] OpenvSwitch Message too large
Hi all,

I'm using OVS with a bonding interface. All interfaces use MTU 1500, but in
ovs-vswitchd.log I see repeated "Message too long" errors. I cannot use
ovs-tcpdump to trace the traffic. Has anyone faced this problem and can
tell me what the cause is? Information follows.

less /var/log/openvswitch/ovs-vswitchd.log
2020-10-16T10:26:34.876Z|5804855|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.883Z|5804856|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.888Z|5804857|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.888Z|5804858|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.888Z|5804859|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.936Z|5804860|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.936Z|5804861|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.936Z|5804862|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:34.947Z|5804863|dpif_netdev|ERR|error receiving data from bond0: Message too long
2020-10-16T10:26:35.179Z|5804864|dpif_netdev|ERR|error receiving data from bond0: Message too long

[root@host1 ~]# ovs-vsctl show
a861ee0a-595a-462a-8729-68de6a9026d8
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br0
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port phy-br0
            Interface phy-br0
                type: patch
                options: {peer=int-br0}
        Port "bond0"
            Interface "bond0"
        Port br0
            Interface br0
                type: internal

[root@host1 ~]# ip link show bond0
10: bond0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e4:43:4b:ed:33:20 brd ff:ff:ff:ff:ff:ff

[root@host1 ~]# ovs-vsctl list interface bond0
_uuid               : 89181c9b-f123-48fb-97ed-3252e39f41ce
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : []
error               : []
external_ids        : {}
ifindex             : 10
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : []
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "e4:43:4b:ed:33:20"
mtu                 : 1500
mtu_request         : 1500
name                : "bond0"
ofport              : 3
ofport_request      : []
options             : {}
other_config        : {}
statistics          : {collisions=0, rx_bytes=428325175126, rx_crc_err=0, rx_dropped=3644084274, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=2667110862, tx_bytes=345128207711, tx_dropped=0, tx_errors=0, tx_packets=563989135}
status              : {driver_name=bonding, driver_version="3.7.1", firmware_version="2"}
type                : ""

[root@host2 ~]# ovs-vsctl list interface br0
_uuid               : c3df3fc1-b78d-464f-af39-1fee5aff12dc
admin_state         : down
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : full
error               : []
external_ids        : {}
ifindex             : 24
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : 1000
link_state          : down
lldp                : {}
mac                 : []
mac_in_use          : "e4:43:4b:ec:c5:60"
mtu                 : 1500
mtu_request         : []
name                : br0
ofport              : 65534
ofport_request      : []
options             : {}
other_config        : {}
statistics          : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=0, tx_dropped=1000184, tx_errors=0, tx_packets=0}
status              : {driver_name=tun, driver_version="1.6", firmware_version=""}
type                : internal
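A possible direction (not from the original mail): "Message too long" is
EMSGSIZE, which in the netdev datapath typically means frames coming up
from bond0 are larger than the MTU configured on the OVS port, for example
because GRO/LRO on the bond members hands OVS coalesced super-frames, or
because jumbo frames arrive from the wire. A hedged sketch of checks,
assuming a kernel bond named bond0 whose member names (em1/em2 here) are
placeholders:

    # Confirm the bond members and the bond MTU.
    cat /proc/net/bonding/bond0
    ip -d link show bond0

    # GRO/LRO on the members can deliver merged frames larger than the
    # 1500-byte MTU; disabling them is a common first test for these drops.
    ethtool -K em1 gro off lro off
    ethtool -K em2 gro off lro off

    # Alternatively, let the OVS port accept larger frames.
    ovs-vsctl set Interface bond0 mtu_request=9000

The very large rx_dropped counter on bond0 above (3644084274) would be
consistent with receives being discarded after these errors.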