On Fri, Mar 20, 2026 at 05:17:41AM +0000, Wei Liu wrote:
> On Mon, Mar 16, 2026 at 02:07:42PM -0700, Long Li wrote:
> > When hv_pci_assign_numa_node() processes a device that does not have
> > HV_PCI_DEVICE_FLAG_NUMA_AFFINITY set or has an out-of-range
> > virtual_numa_node, the device NUMA node is left unset. On x86_64,
> > the uninitialized default happens to be 0, but on ARM64 it is
> > NUMA_NO_NODE (-1).
> >
> > Tests show that when no NUMA information is available from the Hyper-V
> > host, devices perform best when assigned to node 0. With NUMA_NO_NODE
> > the kernel may spread work across NUMA nodes, which degrades
> > performance on Hyper-V, particularly for high-throughput devices like
> > MANA.
> >
> > Always set the device NUMA node to 0 before the conditional NUMA
> > affinity check, so that devices get a performant default when the host
> > provides no NUMA information, and behavior is consistent on both
> > x86_64 and ARM64.
> >
> > Fixes: 999dd956d838 ("PCI: hv: Add support for protocol 1.3 and support PCI_BUS_RELATIONS2")
> > Signed-off-by: Long Li <[email protected]>
>
> I can pick this up next week. PCI maintainers, if you want this to go
> through your tree instead, please let me know.

Applied to hyperv-fixes.
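
For reference, a minimal sketch of the shape of the fix inside
hv_pci_assign_numa_node() in drivers/pci/controller/pci-hyperv.c. The
iteration and helpers around the new default (the device-list walk,
get_pcichild_wslot(), put_pcichild(), and the hbus->bridge->bus access)
are an illustrative reconstruction of that function, not the exact
upstream diff, and may differ in detail:

        static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
        {
                struct pci_bus *bus = hbus->bridge->bus;
                struct hv_pci_dev *hv_dev;
                struct pci_dev *dev;

                list_for_each_entry(dev, &bus->devices, bus_list) {
                        hv_dev = get_pcichild_wslot(hbus,
                                                    devfn_to_wslot(dev->devfn));
                        if (!hv_dev)
                                continue;

                        /*
                         * New: default every device to node 0 so x86_64
                         * and ARM64 behave the same when the host supplies
                         * no NUMA information (previously the node was
                         * left at NUMA_NO_NODE on ARM64).
                         */
                        set_dev_node(&dev->dev, 0);

                        if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
                            hv_dev->desc.virtual_numa_node < num_possible_nodes())
                                set_dev_node(&dev->dev,
                                             numa_map_to_online_node(
                                                     hv_dev->desc.virtual_numa_node));

                        put_pcichild(hv_dev);
                }
        }

The key point is the ordering: node 0 is assigned unconditionally
first, so the existing conditional assignment only overrides it when
the host reports an in-range virtual NUMA node.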