Re: svn commit: r343291 - in head/sys: dev/vmware/vmxnet3 net
On Sat, Feb 2, 2019 at 3:29 PM Patrick Kelsey wrote:
> On Sat, Feb 2, 2019 at 9:28 AM Yuri Pankov wrote:
>> Patrick Kelsey wrote:
>>> Author: pkelsey
>>> Date: Tue Jan 22 01:11:17 2019
>>> New Revision: 343291
>>> URL: https://svnweb.freebsd.org/changeset/base/343291
>>>
>>> Log:
>>>   Convert vmx(4) to being an iflib driver.
>>>
>>>   Also, expose IFLIB_MAX_RX_SEGS to iflib drivers and add
>>>   iflib_dma_alloc_align() to the iflib API.
>>>
>>>   Performance is generally better with the tunable/sysctl
>>>   dev.vmx..iflib.tx_abdicate=1.
>>>
>>>   Reviewed by:    shurd
>>>   MFC after:      1 week
>>>   Relnotes:       yes
>>>   Sponsored by:   RG Nets
>>>   Differential Revision:  https://reviews.freebsd.org/D18761
>>
>> This breaks vmx interfaces for me in ESXi 6.7 (output below).  The
>> review mentions setting hw.pci.honor_msi_blacklist="0" and it helps
>> indeed -- worth mentioning in UPDATING?
>>
>> vmx0: port 0x3000-0x300f mem
>> 0xfe903000-0xfe903fff,0xfe902000-0xfe902fff,0xfe90-0xfe901fff at
>> device 0.0 on pci3
>> vmx0: Using 512 tx descriptors and 256 rx descriptors
>> vmx0: msix_init qsets capped at 8
>> vmx0: intr CPUs: 20 queue msgs: 24 admincnt: 1
>> vmx0: Using 8 rx queues 8 tx queues
>> vmx0: attempting to allocate 9 MSI-X vectors (25 supported)
>> vmx0: failed to allocate 9 MSI-X vectors, err: 6 - using MSI
>> vmx0: attempting to allocate 1 MSI vectors (1 supported)
>> msi: routing MSI IRQ 25 to local APIC 6 vector 48
>> vmx0: using IRQ 25 for MSI
>> vmx0: Using an MSI interrupt
>> msi: Assigning MSI IRQ 25 to local APIC 25 vector 48
>> msi: Assigning MSI IRQ 25 to local APIC 24 vector 48
>> vmx0: bpf attached
>> vmx0: Ethernet address: 00:00:00:00:00:33
>> vmx0: netmap queues/slots: TX 1/512, RX 1/512
>> vmx0: device enable command failed!
>> vmx0: link state changed to UP
>> vmx0: device enable command failed!
>
> Setting hw.pci.honor_msi_blacklist="0" should only be necessary if you
> want to operate with more than one queue.  If
> hw.pci.honor_msi_blacklist="0" is not set, then MSI-X will not be
> available, and MSI will be used, which reduces the number of queues
> that can be configured for use to 1.  This case should work correctly.
>
> I am able to reproduce the behavior you described above on ESXi 6.7
> using the latest snapshot release (based on r343598).  The error that
> appears in the ESXi logs will be similar to:
>
> 2019-02-02T15:14:02.986Z| vcpu-1| I125: VMXNET3 user: failed to
> activate 'Ethernet0', status: 0xbad0001
>
> which vaguely means 'the device did not like something about the
> configuration it was given'.  I will see if I can determine the root
> cause.  Given that enabling MSI-X seems to work around the problem, and
> based on other issues I encountered during development, I currently
> suspect there is a problem with the interrupt index that is being
> configured for the transmit queue in the device configuration structure
> when using MSI.

Indeed, the interrupt index for the tx queue in MSI mode was the problem.
This is now fixed in r343688
(https://svnweb.freebsd.org/changeset/base/343688).  Thanks for reporting
the issue!

-Patrick
___
svn-src-head@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"
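For anyone applying the workaround discussed above on a release predating the r343688 fix, it amounts to a boot-time tunable plus an optional runtime sysctl. This is a sketch only; the vmx unit number (0) is an example and depends on your configuration:

```
# /boot/loader.conf -- VMware guests are on FreeBSD's MSI blacklist, so
# MSI-X is refused by default and vmx(4) falls back to single-queue MSI.
# Setting this tunable allows MSI-X and therefore multiple queues.
hw.pci.honor_msi_blacklist="0"

# After reboot, the performance tunable from the commit message can be
# set at runtime (unit number 0 is an example):
#   sysctl dev.vmx.0.iflib.tx_abdicate=1
```

Note this is only needed for multi-queue operation; with the blacklist honored, the single-queue MSI path is the intended fallback.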
Re: svn commit: r343291 - in head/sys: dev/vmware/vmxnet3 net
On Sat, Feb 2, 2019 at 9:28 AM Yuri Pankov wrote:
> Patrick Kelsey wrote:
>> Author: pkelsey
>> Date: Tue Jan 22 01:11:17 2019
>> New Revision: 343291
>> URL: https://svnweb.freebsd.org/changeset/base/343291
>>
>> Log:
>>   Convert vmx(4) to being an iflib driver.
>>
>>   Also, expose IFLIB_MAX_RX_SEGS to iflib drivers and add
>>   iflib_dma_alloc_align() to the iflib API.
>>
>>   Performance is generally better with the tunable/sysctl
>>   dev.vmx..iflib.tx_abdicate=1.
>>
>>   Reviewed by:    shurd
>>   MFC after:      1 week
>>   Relnotes:       yes
>>   Sponsored by:   RG Nets
>>   Differential Revision:  https://reviews.freebsd.org/D18761
>
> This breaks vmx interfaces for me in ESXi 6.7 (output below).  The
> review mentions setting hw.pci.honor_msi_blacklist="0" and it helps
> indeed -- worth mentioning in UPDATING?
>
> vmx0: port 0x3000-0x300f mem
> 0xfe903000-0xfe903fff,0xfe902000-0xfe902fff,0xfe90-0xfe901fff at
> device 0.0 on pci3
> vmx0: Using 512 tx descriptors and 256 rx descriptors
> vmx0: msix_init qsets capped at 8
> vmx0: intr CPUs: 20 queue msgs: 24 admincnt: 1
> vmx0: Using 8 rx queues 8 tx queues
> vmx0: attempting to allocate 9 MSI-X vectors (25 supported)
> vmx0: failed to allocate 9 MSI-X vectors, err: 6 - using MSI
> vmx0: attempting to allocate 1 MSI vectors (1 supported)
> msi: routing MSI IRQ 25 to local APIC 6 vector 48
> vmx0: using IRQ 25 for MSI
> vmx0: Using an MSI interrupt
> msi: Assigning MSI IRQ 25 to local APIC 25 vector 48
> msi: Assigning MSI IRQ 25 to local APIC 24 vector 48
> vmx0: bpf attached
> vmx0: Ethernet address: 00:00:00:00:00:33
> vmx0: netmap queues/slots: TX 1/512, RX 1/512
> vmx0: device enable command failed!
> vmx0: link state changed to UP
> vmx0: device enable command failed!

Setting hw.pci.honor_msi_blacklist="0" should only be necessary if you
want to operate with more than one queue.  If
hw.pci.honor_msi_blacklist="0" is not set, then MSI-X will not be
available, and MSI will be used, which reduces the number of queues that
can be configured for use to 1.  This case should work correctly.

I am able to reproduce the behavior you described above on ESXi 6.7 using
the latest snapshot release (based on r343598).  The error that appears
in the ESXi logs will be similar to:

2019-02-02T15:14:02.986Z| vcpu-1| I125: VMXNET3 user: failed to activate
'Ethernet0', status: 0xbad0001

which vaguely means 'the device did not like something about the
configuration it was given'.  I will see if I can determine the root
cause.  Given that enabling MSI-X seems to work around the problem, and
based on other issues I encountered during development, I currently
suspect there is a problem with the interrupt index that is being
configured for the transmit queue in the device configuration structure
when using MSI.

-Patrick
Re: svn commit: r343291 - in head/sys: dev/vmware/vmxnet3 net
Patrick Kelsey wrote:
> Author: pkelsey
> Date: Tue Jan 22 01:11:17 2019
> New Revision: 343291
> URL: https://svnweb.freebsd.org/changeset/base/343291
>
> Log:
>   Convert vmx(4) to being an iflib driver.
>
>   Also, expose IFLIB_MAX_RX_SEGS to iflib drivers and add
>   iflib_dma_alloc_align() to the iflib API.
>
>   Performance is generally better with the tunable/sysctl
>   dev.vmx..iflib.tx_abdicate=1.
>
>   Reviewed by:    shurd
>   MFC after:      1 week
>   Relnotes:       yes
>   Sponsored by:   RG Nets
>   Differential Revision:  https://reviews.freebsd.org/D18761

This breaks vmx interfaces for me in ESXi 6.7 (output below).  The
review mentions setting hw.pci.honor_msi_blacklist="0" and it helps
indeed -- worth mentioning in UPDATING?

vmx0: port 0x3000-0x300f mem
0xfe903000-0xfe903fff,0xfe902000-0xfe902fff,0xfe90-0xfe901fff at
device 0.0 on pci3
vmx0: Using 512 tx descriptors and 256 rx descriptors
vmx0: msix_init qsets capped at 8
vmx0: intr CPUs: 20 queue msgs: 24 admincnt: 1
vmx0: Using 8 rx queues 8 tx queues
vmx0: attempting to allocate 9 MSI-X vectors (25 supported)
vmx0: failed to allocate 9 MSI-X vectors, err: 6 - using MSI
vmx0: attempting to allocate 1 MSI vectors (1 supported)
msi: routing MSI IRQ 25 to local APIC 6 vector 48
vmx0: using IRQ 25 for MSI
vmx0: Using an MSI interrupt
msi: Assigning MSI IRQ 25 to local APIC 25 vector 48
msi: Assigning MSI IRQ 25 to local APIC 24 vector 48
vmx0: bpf attached
vmx0: Ethernet address: 00:00:00:00:00:33
vmx0: netmap queues/slots: TX 1/512, RX 1/512
vmx0: device enable command failed!
vmx0: link state changed to UP
vmx0: device enable command failed!
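The MSI fallback that precedes the failure is visible in the boot messages quoted above. As an illustrative sketch (not part of any FreeBSD tooling), the pattern can be checked mechanically; the sample lines here are copied from the report:

```shell
#!/bin/sh
# Sample boot messages from the report above (in practice, read dmesg).
dmesg_sample='vmx0: attempting to allocate 9 MSI-X vectors (25 supported)
vmx0: failed to allocate 9 MSI-X vectors, err: 6 - using MSI
vmx0: Using an MSI interrupt
vmx0: Using 8 rx queues 8 tx queues'

# If MSI-X allocation failed, the driver runs with a single-queue MSI
# setup regardless of how many queues were initially configured.
if printf '%s\n' "$dmesg_sample" | grep -q 'failed to allocate .* MSI-X'; then
    echo "vmx0: MSI-X allocation failed; fell back to MSI (1 queue)"
else
    echo "vmx0: MSI-X in use"
fi
```

With hw.pci.honor_msi_blacklist="0" set, the "failed to allocate" line disappears and all nine MSI-X vectors (eight queues plus one admin) are granted.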