On 01.03.2022 14:34, Rahul Singh wrote:
>> On 24 Feb 2022, at 2:57 pm, Jan Beulich <jbeul...@suse.com> wrote:
>>
>> On 15.02.2022 16:25, Rahul Singh wrote:
>>> vpci/msix.c file will be used for arm architecture when vpci msix
>>> support will be added to ARM, but there is x86 specific code in this
>>> file.
>>>
>>> Move x86 specific code to the x86/hvm/vmsi.c file to make sure common
>>> code will be used for other architecture.
>>
>> Could you provide some criteria by which code is considered x86-specific
>> (or not)? For example ...
> 
> The criterion for moving code to the x86 file is that the code will either be
> unusable on ARM or will need to be different there once MSI-X support is
> introduced for ARM.

That's a very abstract statement, which you can't really derive any
judgement from. It'll be necessary to see in how far the code is
indeed different. After all PCI, MSI, and MSI-X are largely arch-
agnostic.

>>> --- a/xen/arch/x86/hvm/vmsi.c
>>> +++ b/xen/arch/x86/hvm/vmsi.c
>>> @@ -925,4 +925,106 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>>
>>>     return 0;
>>> }
>>> +
>>> +int vpci_make_msix_hole(const struct pci_dev *pdev)
>>> +{
>>> +    struct domain *d = pdev->domain;
>>> +    unsigned int i;
>>> +
>>> +    if ( !pdev->vpci->msix )
>>> +        return 0;
>>> +
>>> +    /* Make sure there's a hole for the MSIX table/PBA in the p2m. */
>>> +    for ( i = 0; i < ARRAY_SIZE(pdev->vpci->msix->tables); i++ )
>>> +    {
>>> +        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
>>> +        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
>>> +                                     vmsix_table_size(pdev->vpci, i) - 1);
>>> +
>>> +        for ( ; start <= end; start++ )
>>> +        {
>>> +            p2m_type_t t;
>>> +            mfn_t mfn = get_gfn_query(d, start, &t);
>>> +
>>> +            switch ( t )
>>> +            {
>>> +            case p2m_mmio_dm:
>>> +            case p2m_invalid:
>>> +                break;
>>> +            case p2m_mmio_direct:
>>> +                if ( mfn_x(mfn) == start )
>>> +                {
>>> +                    clear_identity_p2m_entry(d, start);
>>> +                    break;
>>> +                }
>>> +                /* fallthrough. */
>>> +            default:
>>> +                put_gfn(d, start);
>>> +                gprintk(XENLOG_WARNING,
>>> +                        "%pp: existing mapping (mfn: %" PRI_mfn
>>> +                        "type: %d) at %#lx clobbers MSIX MMIO area\n",
>>> +                        &pdev->sbdf, mfn_x(mfn), t, start);
>>> +                return -EEXIST;
>>> +            }
>>> +            put_gfn(d, start);
>>> +        }
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>
>> ... nothing in this function looks to be x86-specific, except maybe
>> functions like clear_identity_p2m_entry() may not currently be available
>> on Arm. But this doesn't make the code x86-specific.
> 
> I may be wrong, but my understanding of the code is that on x86, if no p2m
> entries are set up for the region, accesses to it are trapped into the
> hypervisor and can be handled by a specific MMIO handler.
> 
> For ARM, however, we also have to provide the GPA when registering the
> MMIO handler.
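> 
> To illustrate (only a sketch; these prototypes are my reading of
> xen/arch/x86/hvm/intercept.c and xen/arch/arm/io.c, so please double-check
> them):
> 
> /* x86: handlers are registered domain-wide; the ops decide per access
>  * whether they claim it, so no GPA is needed at registration time. */
> void register_mmio_handler(struct domain *d, const struct hvm_mmio_ops *ops);
> 
> /* Arm: a handler is attached to a fixed [addr, addr + size) range,
>  * so the GPA must already be known when registering. */
> void register_mmio_handler(struct domain *d,
>                            const struct mmio_handler_ops *ops,
>                            paddr_t addr, paddr_t size, void *priv);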

Question is: Is this just an effect resulting from different implementation,
or an inherent requirement? In the former case, harmonizing things may be an
alternative option.

> For the ARM arch, vpci_make_msix_hole() will be used to register the MMIO
> handler for the MSI-X MMIO region.
> 
> int vpci_make_msix_hole(const struct pci_dev *pdev)
> {
>     struct vpci_msix *msix = pdev->vpci->msix;
>     paddr_t addr, size;
> 
>     for ( int i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>     {
>         addr = vmsix_table_addr(pdev->vpci, i);
>         size = vmsix_table_size(pdev->vpci, i) - 1;
> 
>         register_mmio_handler(pdev->domain, &vpci_msi_mmio_handler,
>                               addr, size, NULL);
>     }
> 
>     return 0;
> }
> 
> Therefore there is a difference in how ARM handles this case.
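> 
> Just to sketch the handler side (vpci_msi_mmio_handler above is only a
> placeholder name, and the callbacks below are not implemented yet; they
> merely show the shape of the Arm mmio_handler_ops):
> 
> static int vpci_msix_mmio_read(struct vcpu *v, mmio_info_t *info,
>                                register_t *r, void *priv)
> {
>     /* Decode info->gpa and read the emulated MSI-X table entry. */
>     return 1;
> }
> 
> static int vpci_msix_mmio_write(struct vcpu *v, mmio_info_t *info,
>                                 register_t r, void *priv)
> {
>     /* Decode info->gpa and update the emulated MSI-X table entry. */
>     return 1;
> }
> 
> static const struct mmio_handler_ops vpci_msi_mmio_handler = {
>     .read  = vpci_msix_mmio_read,
>     .write = vpci_msix_mmio_write,
> };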
>  
>>
>>> +struct vpci_msix *vpci_msix_find(const struct domain *d, unsigned long addr)
>>> +{
>>> +    struct vpci_msix *msix;
>>> +
>>> +    list_for_each_entry ( msix, &d->arch.hvm.msix_tables, next )
>>> +    {
>>> +        const struct vpci_bar *bars = msix->pdev->vpci->header.bars;
>>> +        unsigned int i;
>>> +
>>> +        for ( i = 0; i < ARRAY_SIZE(msix->tables); i++ )
>>> +            if ( bars[msix->tables[i] & PCI_MSIX_BIRMASK].enabled &&
>>> +                 VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, i) )
>>> +                return msix;
>>> +    }
>>> +
>>> +    return NULL;
>>> +}
>>
>> Or take this one - I don't see anything x86-specific in here. The use
>> of d->arch.hvm merely points out that there may be a field which now
>> needs generalizing.
> 
> Yes, you are right. I can avoid this change if I also introduce
> "struct list_head msix_tables" in "d->arch.hvm" for ARM.

Wait - if you pass in the guest address at registration time, you
shouldn't have a need for a "find" function.
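For instance (only a sketch, reusing the names from your example above), the
per-device vpci_msix could be handed to the handler through the priv argument
at registration time:

    register_mmio_handler(pdev->domain, &vpci_msi_mmio_handler,
                          vmsix_table_addr(pdev->vpci, i),
                          vmsix_table_size(pdev->vpci, i),
                          pdev->vpci->msix);

The read/write callbacks would then get that pointer back via their priv
parameter, so no list walk (and no msix_tables list) would be needed.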

Jan

