On 4/7/25 13:02, Stewart Hildebrand wrote:
> On 3/30/25 17:59, Julien Grall wrote:
>> Hi Stewart,
>>
>> I realize this series is at v18, but there are a few bits, security-wise,
>> that concern me. They don't have to be handled now because vPCI is
>> still in tech preview, but we probably want to keep track of any issues
>> if that hasn't already been done in the code.
> 
> No worries, we want to get this right. Thank you for taking a look.
> 
>> On 25/03/2025 17:27, Stewart Hildebrand wrote:
>>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>>> index 1e6aa5d799b9..49c9444195d7 100644
>>> --- a/xen/drivers/vpci/vpci.c
>>> +++ b/xen/drivers/vpci/vpci.c
>>> @@ -81,6 +81,32 @@ static int assign_virtual_sbdf(struct pci_dev *pdev)
>>>       return 0;
>>>   }
>>>
>>> +/*
>>> + * Find the physical device which is mapped to the virtual device
>>> + * and translate virtual SBDF to the physical one.
>>> + */
>>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf)
>>> +{
>>> +    const struct pci_dev *pdev;
>>> +
>>> +    ASSERT(!is_hardware_domain(d));
>>> +    read_lock(&d->pci_lock);
>>> +
>>> +    for_each_pdev ( d, pdev )
>>
>> I can't remember whether this was discussed before (there is no
>> indication in the commit message). What's the maximum number of PCI
>> devices supported? Do we limit it?
>>
>> Asking because iterating through the list could end up being quite
>> expensive.
> 
> 32. See xen/include/xen/vpci.h:
> #define VPCI_MAX_VIRT_DEV       (PCI_SLOT(~0) + 1)
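>
> To spell out the arithmetic (PCI_SLOT() is the usual 5-bit slot
> extraction from xen/include/xen/pci.h):
>
>     /* PCI_SLOT() takes bits 7:3 of a devfn, so PCI_SLOT(~0) == 31. */
>     #define PCI_SLOT(devfn)   (((devfn) >> 3) & 0x1f)
>     /* One virtual device per slot on a single virtual bus: 31 + 1. */
>     #define VPCI_MAX_VIRT_DEV (PCI_SLOT(~0) + 1)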
> 
>> Also, not a change for this patch, but it seems vpci_{read,write} will
>> also do another lookup. This is quite inefficient. We should consider
>> returning a pdev and using it in vpci_{read,write}.
> 
> Hm. Right. Indeed, a future consideration.
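>
> Something along these lines, perhaps (untested sketch; the signature
> is just a guess, not settled):
>
>     /*
>      * Hypothetical variant: hand the physical pdev back to the caller
>      * so vpci_{read,write} can skip their own lookup. The caller would
>      * have to hold d->pci_lock for the pointer to stay valid.
>      */
>     const struct pci_dev *vpci_translate_virtual_device(struct domain *d,
>                                                         pci_sbdf_t *sbdf);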
> 
>>> +    {
>>> +        if ( pdev->vpci && (pdev->vpci->guest_sbdf.sbdf == sbdf->sbdf) )
>>> +        {
>>> +            /* Replace guest SBDF with the physical one. */
>>> +            *sbdf = pdev->sbdf;
>>> +            read_unlock(&d->pci_lock);
>>> +            return true;
>>> +        }
>>> +    }
>>> +
>>> +    read_unlock(&d->pci_lock);
>>
>> At the point this is unlocked, what prevents the sbdf from being
>> detached from the domain?
> 
> Nothing.
> 
>> If nothing, would this mean the domain can access something it should
>> not?
> 
> Yep. I have a patch in the works to acquire the lock in the I/O
> handlers. I would prefer doing this in a pre-patch so that we don't
> temporarily introduce the race condition.

I spoke too soon. If the pdev were deassigned right after dropping the
lock here, the access would be rejected in the second pdev lookup inside
vpci_{read,write}, because the pdev would no longer belong to the domain.

In hindsight, moving the lock (as I did in v19) was not strictly
necessary. Anyway, this can all be simplified by calling
vpci_translate_virtual_device() from within vpci_{read,write}. I'll send
v20 with this approach, then we can decide if we like v18 or v20 better.
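
Roughly the shape I have in mind for v20 (an untested sketch, not the
actual patch; it assumes the translation helper stops taking the lock
itself and relies on the caller holding it):

    uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
    {
        struct domain *d = current->domain;

        /*
         * Take the lock once for both the translation and the pdev
         * lookup, so the device cannot be deassigned in between. In
         * this shape vpci_translate_virtual_device() would no longer
         * take d->pci_lock itself.
         */
        read_lock(&d->pci_lock);

        if ( !is_hardware_domain(d) &&
             !vpci_translate_virtual_device(d, &sbdf) )
        {
            read_unlock(&d->pci_lock);
            return ~(uint32_t)0;
        }

        /* Existing pdev lookup and register read follow, then unlock. */
    }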

>>> +    return false;
>>> +}
>>> +
>>>   #endif /* CONFIG_HAS_VPCI_GUEST_SUPPORT */
>>>
>>>   void vpci_deassign_device(struct pci_dev *pdev)
>>> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
>>> index 807401b2eaa2..e355329913ef 100644
>>> --- a/xen/include/xen/vpci.h
>>> +++ b/xen/include/xen/vpci.h
>>> @@ -311,6 +311,18 @@ static inline int __must_check vpci_reset_device(struct pci_dev *pdev)
>>>       return vpci_assign_device(pdev);
>>>   }
>>>
>>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf);
>>> +#else
>>> +static inline bool vpci_translate_virtual_device(struct domain *d,
>>> +                                                 pci_sbdf_t *sbdf)
>>> +{
>>> +    ASSERT_UNREACHABLE();
>>> +
>>> +    return false;
>>> +}
>>> +#endif
>>> +
>>>   #endif
>>>
>>>   /*