On Fri, 28 Feb 2020 15:43:04 +0100
Jean-Philippe Brucker <jean-phili...@linaro.org> wrote:

> On Wed, Feb 26, 2020 at 12:35:06PM +0000, Jonathan Cameron wrote:
> > > + * A single Process Address Space ID (PASID) is allocated for each mm. In the
> > > + * example, devices use PASID 1 to read/write into address space X and PASID 2
> > > + * to read/write into address space Y. Calling iommu_sva_get_pasid() on bond 1
> > > + * returns 1, and calling it on bonds 2-4 returns 2.
> > > + *
> > > + * Hardware tables describing this configuration in the IOMMU would typically
> > > + * look like this:
> > > + *
> > > + *                                PASID tables
> > > + *                                 of domain A
> > > + *                              .->+--------+
> > > + *                             / 0 |        |-------> io_pgtable
> > > + *                            /    +--------+
> > > + *            Device tables  /   1 |        |-------> pgd X
> > > + *              +--------+  /      +--------+
> > > + *      00:00.0 |      A |-'     2 |        |--.
> > > + *              +--------+         +--------+   \
> > > + *              :        :       3 |        |    \
> > > + *              +--------+         +--------+     --> pgd Y
> > > + *      00:01.0 |      B |--.                    /
> > > + *              +--------+   \                  |
> > > + *      00:01.1 |      B |----+   PASID tables  |
> > > + *              +--------+     \   of domain B  |
> > > + *                              '->+--------+   |
> > > + *                               0 |        |-- | --> io_pgtable
> > > + *                                 +--------+   |
> > > + *                               1 |        |   |
> > > + *                                 +--------+   |
> > > + *                               2 |        |---'
> > > + *                                 +--------+
> > > + *                               3 |        |
> > > + *                                 +--------+
> > > + *
> > > + * With this model, a single call binds all devices in a given domain to an
> > > + * address space. Other devices in the domain will get the same bond
> > > + * implicitly. However, users must issue one bind() for each device, because
> > > + * IOMMUs may implement SVA differently. Furthermore, mandating one bind() per
> > > + * device allows the driver to perform sanity-checks on device capabilities.
> >   
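
As an aside for anyone following the thread: with the API proposed in this
series, a driver with two devices in the same domain still issues one bind
per device, roughly like this (an untested sketch; dev1, dev2 and drvdata
are assumed to come from the driver):

	struct iommu_sva *h1, *h2;

	h1 = iommu_sva_bind_device(dev1, current->mm, drvdata);
	if (IS_ERR(h1))
		return PTR_ERR(h1);

	h2 = iommu_sva_bind_device(dev2, current->mm, drvdata);
	if (IS_ERR(h2)) {
		iommu_sva_unbind_device(h1);
		return PTR_ERR(h2);
	}

	/* Both bonds share the mm's PASID */
	WARN_ON(iommu_sva_get_pasid(h1) != iommu_sva_get_pasid(h2));
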
> > > + *
> > > + * In some IOMMUs, one entry of the PASID table (typically the first one) can
> > > + * hold non-PASID translations. In this case PASID 0 is reserved and the first
> > > + * entry points to the io_pgtable pointer. In other IOMMUs the io_pgtable
> > > + * pointer is held in the device table and PASID 0 is available to the
> > > + * allocator.
> > 
> > Is it worth hammering home in here that we can only do this because the
> > PASID space is global (with the exception of PASID 0)?  It's a convenient
> > simplification but not necessarily a hardware restriction, so perhaps we
> > should remind people somewhere in here?
> 
> I could add this four paragraphs up:
> 
> "A single Process Address Space ID (PASID) is allocated for each mm. It is
> a choice made for the Linux SVA implementation, not a hardware
> restriction."

Perfect.
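
For future readers it may be worth noting that this amounts to allocating
from a single system-wide PASID space, with PASID 0 kept back for the
non-PASID entry described above. Purely as an illustration (pasid_max is a
made-up limit here, and this is not necessarily how the series does it):

	/* One allocator shared by every io_mm (illustrative only) */
	static DEFINE_IDA(iommu_pasid_ida);

	/* Allocate from 1 upwards so that PASID 0 stays reserved */
	io_mm->pasid = ida_alloc_range(&iommu_pasid_ida, 1, pasid_max - 1,
				       GFP_KERNEL);
	if (io_mm->pasid < 0)
		return io_mm->pasid;
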

> 
> > > + */
> > > +
> > > +struct io_mm {
> > > +	struct list_head		devices;
> > > +	struct mm_struct		*mm;
> > > +	struct mmu_notifier		notifier;
> > > +
> > > +	/* Late initialization */
> > > +	const struct io_mm_ops		*ops;
> > > +	void				*ctx;
> > > +	int				pasid;
> > > +};
> > > +
> > > +#define to_io_mm(mmu_notifier)	container_of(mmu_notifier, struct io_mm, notifier)
> > > +#define to_iommu_bond(handle)	container_of(handle, struct iommu_bond, sva)
> > 
> > Code ordering wise, do we want this after the definition of iommu_bond?
> > 
> > For both of these it's a bit non-obvious what they come 'from'.
> > I wouldn't naturally assume to_io_mm gets me from the notifier to the io_mm,
> > for example.  Not sure it matters though if these are only used in a few
> > places.
> 
> Right, I can rename the first one to mn_to_io_mm(). The second one I think
> might be good enough.

Agreed. The second one does feel more natural.
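
For context, the rename also makes the call sites read well, e.g. in a
notifier callback (a sketch, assuming the usual mmu_notifier hooks):

	static void io_mm_invalidate_range(struct mmu_notifier *mn,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end)
	{
		/* Recover the io_mm embedding this notifier */
		struct io_mm *io_mm = mn_to_io_mm(mn);

		/* ... invalidate [start, end) in each bound device ... */
	}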

> 
> 
> > > +static struct iommu_sva *
> > > +io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata)
> > > +{
> > > +	int ret = 0;
> > 
> > I'm fairly sure this is set in all paths below.  Now, of course, the
> > compiler might not think so, in which case fair enough :)
> >   
> > > +	bool attach_domain = true;
> > > +	struct iommu_bond *bond, *tmp;
> > > +	struct iommu_domain *domain, *other;
> > > +	struct iommu_sva_param *param = dev->iommu_param->sva_param;
> > > +
> > > +	domain = iommu_get_domain_for_dev(dev);
> > > +
> > > +	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> > > +	if (!bond)
> > > +		return ERR_PTR(-ENOMEM);
> > > +
> > > +	bond->sva.dev	= dev;
> > > +	bond->drvdata	= drvdata;
> > > +	refcount_set(&bond->refs, 1);
> > > +	RCU_INIT_POINTER(bond->io_mm, io_mm);
> > > +
> > > +	mutex_lock(&iommu_sva_lock);
> > > +	/* Is it already bound to the device or domain? */
> > > +	list_for_each_entry(tmp, &io_mm->devices, mm_head) {
> > > +		if (tmp->sva.dev != dev) {
> > > +			other = iommu_get_domain_for_dev(tmp->sva.dev);
> > > +			if (domain == other)
> > > +				attach_domain = false;
> > > +
> > > +			continue;
> > > +		}
> > > +
> > > +		if (WARN_ON(tmp->drvdata != drvdata)) {
> > > +			ret = -EINVAL;
> > > +			goto err_free;
> > > +		}
> > > +
> > > +		/*
> > > +		 * Hold a single io_mm reference per bond. Note that we can't
> > > +		 * return an error after this, otherwise the caller would drop
> > > +		 * an additional reference to the io_mm.
> > > +		 */
> > > +		refcount_inc(&tmp->refs);
> > > +		io_mm_put(io_mm);
> > > +		kfree(bond);
> > 
> > Freeing outside the lock would be ever so slightly more logical, given we
> > allocated before taking the lock.
> >   
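To be clear, I meant something like (sketch only):

	refcount_inc(&tmp->refs);
	io_mm_put(io_mm);
	mutex_unlock(&iommu_sva_lock);
	/* allocated outside the lock, so freed outside it too */
	kfree(bond);
	return &tmp->sva;
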
> > > +		mutex_unlock(&iommu_sva_lock);
> > > +		return &tmp->sva;
> > > +	}
> > > +
> > > +	list_add_rcu(&bond->mm_head, &io_mm->devices);
> > > +	param->nr_bonds++;
> > > +	mutex_unlock(&iommu_sva_lock);
> > > +
> > > +	ret = io_mm->ops->attach(bond->sva.dev, io_mm->pasid, io_mm->ctx,
> > > +				 attach_domain);
> > > +	if (ret)
> > > +		goto err_remove;
> > > +
> > > +	return &bond->sva;
> > > +
> > > +err_remove:
> > > +	/*
> > > +	 * At this point concurrent threads may have started to access the
> > > +	 * io_mm->devices list in order to invalidate address ranges, which
> > > +	 * requires freeing the bond via kfree_rcu()
> > > +	 */
> > > +	mutex_lock(&iommu_sva_lock);
> > > +	param->nr_bonds--;
> > > +	list_del_rcu(&bond->mm_head);
> > > +
> > > +err_free:
> > > +	mutex_unlock(&iommu_sva_lock);
> > > +	kfree_rcu(bond, rcu_head);
> > 
> > I don't suppose it matters much, but we don't need the RCU free if
> > we follow the err_free goto.  Perhaps it would be cleaner in this case
> > not to use a unified exit path, but to handle that case inline?
> 
> Agreed, though I've moved the kzalloc() later as suggested by Jacob; I think
> it looks a little better and simplifies the error paths.
> 
> Thanks,
> Jean
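
Sounds good. For the archive, I'd guess the reshuffle ends up looking
roughly like this (just a sketch of the idea, not the actual v2):

	mutex_lock(&iommu_sva_lock);
	list_for_each_entry(tmp, &io_mm->devices, mm_head) {
		/* ... existing duplicate and domain checks ... */
	}

	/* Only allocate once we know a new bond is needed */
	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
	if (!bond) {
		mutex_unlock(&iommu_sva_lock);
		return ERR_PTR(-ENOMEM);
	}
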
Jonathan
