On Wed, Nov 03, 2021 at 06:34:16AM +0000, Oleksandr Andrushchenko wrote:
> Hi, Roger
> 
> On 26.10.21 14:33, Roger Pau Monné wrote:
> > On Thu, Sep 30, 2021 at 10:52:22AM +0300, Oleksandr Andrushchenko wrote:
> >> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> >> index 43b8a0817076..33033a3a8f8d 100644
> >> --- a/xen/include/xen/pci.h
> >> +++ b/xen/include/xen/pci.h
> >> @@ -137,6 +137,24 @@ struct pci_dev {
> >>       struct vpci *vpci;
> >>   };
> >>   
> >> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> >> +struct vpci_dev {
> >> +    struct list_head list;
> >> +    /* Physical PCI device this virtual device is connected to. */
> >> +    const struct pci_dev *pdev;
> >> +    /* Virtual SBDF of the device. */
> >> +    union {
> >> +        struct {
> >> +            uint8_t devfn;
> >> +            uint8_t bus;
> >> +            uint16_t seg;
> >> +        };
> >> +        pci_sbdf_t sbdf;
> >> +    };
> >> +    struct domain *domain;
> >> +};
> >> +#endif
> > I wonder whether this is strictly needed. Won't it be enough to store
> > the virtual (ie: guest) sbdf inside the existing vpci struct?
> >
> > It would avoid the overhead of the translation you do from pdev ->
> > vdev, and there doesn't seem to be anything relevant stored in
> > vpci_dev apart from the virtual sbdf.
> TL;DR: it seems it might be needed from a performance POV. If we don't
> introduce it, then for every MMIO trap we have to take the global PCI
> lock, i.e. pcidevs_{lock|unlock}.
> Note: the pcidevs lock is a recursive lock
> 
> There are 2 sources of access to the virtual devices:
> 1. During initialization, when we add, assign or de-assign a PCI device
> 2. At run-time, when we trap a configuration space access and need to
> translate the virtual SBDF into the physical SBDF
> Note that at least de-assign can run concurrently with the MMIO handlers.
> 
> Now let's see which locks are in use while doing that.
> 
> 1. No struct vpci_dev is used.
> 1.1. We drop the structure and just add pdev->vpci->guest_sbdf, as you
> suggest.
> 1.2. To protect virtual devices we use pcidevs_{lock|unlock}.
> 1.3. Locking happens at the system level.
> 
> 2. struct vpci_dev is used.
> 2.1. We have a per-domain lock, vdev_lock.
> 2.2. Locking happens at the per-domain level.
> 
> To compare the two:
> 
> 1. Without vpci_dev
> pros: much simpler code
> pros/cons: the global pcidevs lock is used during MMIO handling, but it
> is a recursive lock
> 
> 2. With vpci_dev
> pros: per-domain locking
> cons: more code
> 
> I have implemented both methods, and we need to decide which route to
> take.

We could always see about converting the pcidevs lock into a rw one if
it turns out there's too much contention. PCI config space accesses
shouldn't be that common or performance critical, so having some
contention might not be noticeable.
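Purely as a sketch of that direction (the current pcidevs lock is a
recursive spinlock, so this is not a drop-in change):

/* Sketch only: with an rwlock, config space traps would take the read
 * side and could run concurrently, while add/assign/de-assign paths
 * would take the write side. */
static DEFINE_RWLOCK(pcidevs_rwlock);   /* hypothetical replacement */

static void cfg_trap(void)
{
    read_lock(&pcidevs_rwlock);
    /* ... look up the (virtual) device and forward the access ... */
    read_unlock(&pcidevs_rwlock);
}

static void assign_device(void)
{
    write_lock(&pcidevs_rwlock);
    /* ... modify the per-domain device lists ... */
    write_unlock(&pcidevs_rwlock);
}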

TBH I would start with the simpler solution (add guest_sbdf and use
pci lock) and move to something more complex once issues are
identified.
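I.e. roughly something like the below (untested; guest_sbdf and the
helper name are only placeholders to illustrate the idea):

/* Sketch only: keep the guest view of the SBDF in the existing vpci
 * struct instead of introducing a separate vpci_dev. */
struct vpci {
    /* ... existing fields ... */
#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
    pci_sbdf_t guest_sbdf;              /* virtual SBDF seen by the guest */
#endif
};

/* Translate a guest SBDF into the physical device; the caller is
 * expected to hold the pcidevs lock. */
static struct pci_dev *vpci_guest_sbdf_to_pdev(struct domain *d,
                                               pci_sbdf_t guest_sbdf)
{
    struct pci_dev *pdev;

    ASSERT(pcidevs_locked());

    for_each_pdev ( d, pdev )
        if ( pdev->vpci && pdev->vpci->guest_sbdf.sbdf == guest_sbdf.sbdf )
            return pdev;

    return NULL;
}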

Regards, Roger.
