On Mon, Jul 07, 2025 at 06:49:02PM +0000, Michael Kelley wrote:
> From: Nam Cao <nam...@linutronix.de> Sent: Monday, July 7, 2025 1:20 AM
> > 
> > Move away from the legacy MSI domain setup and switch to
> > msi_create_parent_irq_domain().
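
For anyone following along: the parent-domain setup pattern looks
roughly like the sketch below. This is a minimal illustration only; the
flag values, bus token, and ops names here are assumptions for the sake
of the example, not taken from the patch itself.

  /* Sketch only: flag/token values are illustrative assumptions */
  static const struct msi_parent_ops hv_msi_parent_ops = {
          .supported_flags   = MSI_GENERIC_FLAGS_MASK | MSI_FLAG_PCI_MSIX,
          .required_flags    = MSI_FLAG_USE_DEF_DOM_OPS |
                               MSI_FLAG_USE_DEF_CHIP_OPS,
          .bus_select_token  = DOMAIN_BUS_NEXUS,
          .prefix            = "HV-",
          /* generic helper provided by the msi-lib irqchip code */
          .init_dev_msi_info = msi_lib_init_dev_msi_info,
  };

  struct irq_domain_info info = {
          .fwnode    = fwnode,              /* assumed to be in scope */
          .ops       = &hv_pcie_domain_ops, /* driver's domain ops */
          .host_data = hbus,                /* assumed driver context */
          .parent    = parent_domain,       /* e.g. the vector domain */
  };

  struct irq_domain *domain =
          msi_create_parent_irq_domain(&info, &hv_msi_parent_ops);
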
> > 
> > While doing the conversion, I noticed that hv_compose_msi_msg() is doing
> > more than it is supposed to (composing the message). This function also
> > allocates and populates struct tran_int_desc, which should be done in
> > hv_pcie_domain_alloc() instead. It works, but it is not the correct
> > design. However, I have no hardware to test such a change, so I left a
> > TODO note.
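
To make the TODO concrete, a rough and untested sketch of the suggested
direction follows. The function name matches the patch, but the body is
an assumption about how it could look, not actual driver code.

  static int hv_pcie_domain_alloc(struct irq_domain *d, unsigned int virq,
                                  unsigned int nr_irqs, void *arg)
  {
          struct tran_int_desc *int_desc;
          int ret;

          /* Let the parent (vector) domain allocate first */
          ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg);
          if (ret)
                  return ret;

          int_desc = kzalloc(sizeof(*int_desc), GFP_KERNEL);
          if (!int_desc) {
                  irq_domain_free_irqs_parent(d, virq, nr_irqs);
                  return -ENOMEM;
          }

          /*
           * Exchange the create-interrupt message with the host here and
           * sleep until the VMBus completion arrives. That is fine in
           * this context, unlike in hv_compose_msi_msg(), which may run
           * with interrupts disabled and therefore has to poll.
           */

          irq_domain_get_irq_data(d, virq)->chip_data = int_desc;
          return 0;
  }

With that in place, hv_compose_msi_msg() would shrink to copying the
cached int_desc into the struct msi_msg.
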
> > 
> > Acked-by: Bjorn Helgaas <bhelg...@google.com>
> > Reviewed-by: Thomas Gleixner <t...@linutronix.de>
> > Signed-off-by: Nam Cao <nam...@linutronix.de>
> 
> [Adding linux-hyperv@vger.kernel.org so that the Linux on Hyper-V folks
> have visibility.]
> 
> This all looks good to me now. Thanks for the additional explanation of
> the TODO. I understand what you are suggesting. Moving the interaction
> with the Hyper-V host into hv_pcie_domain_alloc() has additional appeal
> because it should eliminate the need for the ugly polling for a VMBus
> response. However, I'm unlikely to be the person implementing the
> TODO. hv_compose_msi_msg() is a real beast of a function, and I lack
> access to hardware to fully test the move, particularly a device that
> does multi-MSI. I don't think such a device is available in a VM in the
> Azure public cloud.
> 
> I've tested this patch in an Azure VM that has a MANA NIC. The MANA
> driver has updates in linux-next to use MSI-X dynamic allocation, and
> that dynamic allocation appears to work correctly with this patch. My
> testing included unbinding and rebinding the driver several times so
> that the full round-trip is tested.
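
For context, the dynamic MSI-X path that this exercises looks roughly
like the sketch below (error handling trimmed; the function name and
vector counts are placeholders, and the device's MSI domain must
advertise MSI_FLAG_PCI_MSIX_ALLOC_DYN for the dynamic calls to succeed):

  #include <linux/pci.h>
  #include <linux/msi_api.h>      /* MSI_ANY_INDEX */

  static int example_grow_msix(struct pci_dev *pdev)
  {
          struct msi_map map;
          int nvec;

          /* Enable a baseline set of MSI-X vectors */
          nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSIX);
          if (nvec < 0)
                  return nvec;

          /* Dynamically add one more vector at any free index */
          map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, NULL);
          if (map.index < 0)
                  return map.index;

          /* map.virq is the Linux IRQ number; free it again */
          pci_msix_free_irq(pdev, map);
          return 0;
  }

Unbinding the driver frees all vectors and rebinding repeats the
allocation, which is the round-trip mentioned above.
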
> 
> Reviewed-by: Michael Kelley <mhkli...@outlook.com>
> Tested-by: Michael Kelley <mhkli...@outlook.com>

Acked-by: Wei Liu <wei....@kernel.org>
