On Tue, Oct 15, 2019 at 03:12:29PM -0300, Jason Gunthorpe wrote:
> +static void mn_itree_release(struct mmu_notifier_mm *mmn_mm,
> +			     struct mm_struct *mm)
> +{
> +	struct mmu_notifier_range range = {
> +		.flags = MMU_NOTIFIER_RANGE_BLOCKABLE,
> +
On Mon, Oct 21, 2019 at 03:11:57PM -0400, Jerome Glisse wrote:
> > Since that reader is not locked we need release semantics here to
> > ensure the unlocked reader sees a fully initialized mmu_notifier_mm
> > structure when it observes the pointer.
>
> I thought the mm_take_all_locks() would
On Mon, Oct 21, 2019 at 02:30:56PM -0400, Jerome Glisse wrote:
> > +/**
> > + * mmu_range_read_retry - End a read side critical section against a VA range
> > + * mrn: The range under lock
> > + * seq: The return of the paired mmu_range_read_begin()
> > + *
> > + * This MUST be called under
On Tue, Oct 15, 2019 at 03:12:29PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> Of the 13 users of mmu_notifiers, 8 of them use only
> invalidate_range_start/end() and immediately intersect the
> mmu_notifier_range with some kind of internal list of VAs. 4 use an
> interval tree (i915_gem, radeon_mn, umem_odp, hfi1). 4 use a linked list
> of some kind