Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-12 Thread Jan Kara
On Tue 11-10-16 16:51:30, Ross Zwisler wrote:
> On Tue, Oct 11, 2016 at 10:31:52AM +0200, Jan Kara wrote:
> > On Fri 07-10-16 15:09:02, Ross Zwisler wrote:
> > > diff --git a/fs/dax.c b/fs/dax.c
> > > index ac3cd05..e51d51f 100644
> > > --- a/fs/dax.c
> > > +++ b/fs/dax.c
> > > @@ -281,7 +281,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct 
> > > address_space *mapping,
> > >* queue to the start of that PMD.  This ensures that all offsets in
> > >* the range covered by the PMD map to the same bit lock.
> > >*/
> > > - if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> > > + if ((unsigned long)entry & RADIX_DAX_PMD)
> > >   index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);
> > 
> > I agree with Christoph - helper for masking type bits would make this
> > nicer.
> 
> Fixed via a dax_flag_test() helper as I outlined in the mail to Christoph.  It
> seems clean to me, but if you or Christoph feel strongly that it would be
> cleaner as a local 'flags' variable, I'll make the change.

One idea I had is that you could have helpers like:

dax_is_pmd_entry()
dax_is_pte_entry()
dax_is_empty_entry()
dax_is_hole_entry()

And then you would use these helpers - all the flag handling would be hidden in
the helpers, so even if we later decide to change the flagging scheme to
compress things, it should be a pretty local change.
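
For concreteness, such helpers could be as small as the following (a sketch
built on the flag names already used in this patch; dax_is_hole_entry() assumes
4k holes keep being tracked as ordinary zero pages rather than exceptional
entries, which may not match the final code):

/* these are meant to be called on DAX radix tree entries */
static inline bool dax_is_pmd_entry(void *entry)
{
	return (unsigned long)entry & RADIX_DAX_PMD;
}

static inline bool dax_is_pte_entry(void *entry)
{
	return !((unsigned long)entry & RADIX_DAX_PMD);
}

static inline bool dax_is_empty_entry(void *entry)
{
	return (unsigned long)entry & RADIX_DAX_EMPTY;
}

static inline bool dax_is_hole_entry(void *entry)
{
	/* 4k holes are plain zero pages, not exceptional radix tree entries */
	return !radix_tree_exceptional_entry(entry);
}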

> > > - entry = (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY |
> > > -RADIX_DAX_ENTRY_LOCK);
> > > +
> > > + /*
> > > +  * Besides huge zero pages the only other thing that gets
> > > +  * downgraded are empty entries which don't need to be
> > > +  * unmapped.
> > > +  */
> > > + if (pmd_downgrade && ((unsigned long)entry & RADIX_DAX_HZP))
> > > + unmap_mapping_range(mapping,
> > > + (index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > > +
> > >   spin_lock_irq(&mapping->tree_lock);
> > > - err = radix_tree_insert(&mapping->page_tree, index, entry);
> > > +
> > > + if (pmd_downgrade) {
> > > + radix_tree_delete(&mapping->page_tree, index);
> > > + mapping->nrexceptional--;
> > > + dax_wake_mapping_entry_waiter(mapping, index, entry,
> > > + false);
> > 
> > You need to set 'wake_all' argument here to true. Otherwise there could be
> > waiters waiting for non-existent entry forever...
> 
> Interesting.   Fixed, but let me make sure I understand.  So is the issue that
> you could have say 2 tasks waiting on a PMD index that has been rounded down
> to the PMD index via dax_entry_waitqueue()?
> 
> The person holding the lock on the entry would remove the PMD, insert a PTE
> and wake just one of the PMD aligned waiters.  That waiter would wake up, do
> something PTE based (since the PMD space is now polluted with PTEs), and then
> wake any waiters on its PTE index.  Meanwhile, the second waiter could sleep
> forever on the PMD aligned index.  Is this correct?

Yes.

> So, perhaps more succinctly:
> 
> Thread 1                          Thread 2                      Thread 3
> --------                          --------                      --------
> index 0x202, hold PMD lock 0x200
>                                   index 0x203, sleep on 0x200
>                                                                 index 0x204, sleep on 0x200
> downgrade, removing 0x200
> wake one waiter on 0x200
> insert PTE @ 0x202
>                                   wake up, grab index 0x203
>                                   ...
>                                   wake one waiter on index 0x203
> 
>                                                                 ... sleeps forever
> Right?
 
Exactly.
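
To make the fix concrete, the downgrade path with 'wake_all' set would look
roughly like this (a sketch against the hunk quoted above, not necessarily the
final patch):

	if (pmd_downgrade) {
		radix_tree_delete(&mapping->page_tree, index);
		mapping->nrexceptional--;
		/*
		 * The PMD-sized entry is gone for good, so wake *all*
		 * waiters sleeping on the PMD-aligned index, not just one.
		 */
		dax_wake_mapping_entry_waiter(mapping, index, entry, true);
	}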

> > > @@ -608,22 +683,28 @@ static void *dax_insert_mapping_entry(struct 
> > > address_space *mapping,
> > >   error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
> > >   if (error)
> > >   return ERR_PTR(error);
> > > + } else if (((unsigned long)entry & RADIX_DAX_HZP) &&
> > > + !(flags & RADIX_DAX_HZP)) {
> > > + /* replacing huge zero page with PMD block mapping */
> > > + unmap_mapping_range(mapping,
> > > + (vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > >   }
> > >  
> > >   spin_lock_irq(&mapping->tree_lock);
> > > - new_entry = (void *)((unsigned long)RADIX_DAX_ENTRY(sector, false) |
> > > -RADIX_DAX_ENTRY_LOCK);
> > > + new_entry = dax_radix_entry(sector, flags);
> > > +
> > 
> > You've lost the RADIX_DAX_ENTRY_LOCK flag here?
> 
> Oh, nope, that's embedded in the dax_radix_entry() helper:
> 
> /* entries begin locked */
> static inline void *dax_radix_entry(sector_t sector, unsigned long flags)
> {
>   return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
>   ((unsigned long)sector << RADIX_DAX_SHIFT) |
>   RADIX_DAX_ENTRY_LOCK);
> }
> 
> I'll s/dax_radix_entry/dax_radix_locked_entry/ or something to make this
> clearer to the reader.

Yep, that would be better. Thanks!
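
For reference, the renamed helper would then read something like this (same
body as quoted above, only the name changes):

/* entries returned by this helper start out locked */
static inline void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
{
	return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
			((unsigned long)sector << RADIX_DAX_SHIFT) |
			RADIX_DAX_ENTRY_LOCK);
}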

> > >   if 

Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-11 Thread Ross Zwisler
On Tue, Oct 11, 2016 at 10:31:52AM +0200, Jan Kara wrote:
> On Fri 07-10-16 15:09:02, Ross Zwisler wrote:
> > diff --git a/fs/dax.c b/fs/dax.c
> > index ac3cd05..e51d51f 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -281,7 +281,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct 
> > address_space *mapping,
> >  * queue to the start of that PMD.  This ensures that all offsets in
> >  * the range covered by the PMD map to the same bit lock.
> >  */
> > -   if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> > +   if ((unsigned long)entry & RADIX_DAX_PMD)
> > index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);
> 
> I agree with Christoph - helper for masking type bits would make this
> nicer.

Fixed via a dax_flag_test() helper as I outlined in the mail to Christoph.  It
seems clean to me, but if you or Christoph feel strongly that it would be
cleaner as a local 'flags' variable, I'll make the change.

> ...
> > -static void *grab_mapping_entry(struct address_space *mapping, pgoff_t 
> > index)
> > +static void *grab_mapping_entry(struct address_space *mapping, pgoff_t 
> > index,
> > +   unsigned long size_flag)
> >  {
> > +   bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
> > void *entry, **slot;
> >  
> >  restart:
> > spin_lock_irq(&mapping->tree_lock);
> > entry = get_unlocked_mapping_entry(mapping, index, &slot);
> > +
> > +   if (entry) {
> > +   if (size_flag & RADIX_DAX_PMD) {
> > +   if (!radix_tree_exceptional_entry(entry) ||
> > +   !((unsigned long)entry & RADIX_DAX_PMD)) {
> > +   entry = ERR_PTR(-EEXIST);
> > +   goto out_unlock;
> 
> You need to call put_unlocked_mapping_entry() if you use
> get_unlocked_mapping_entry() and then decide not to lock it. The reason is
> that the waitqueues we use are exclusive (we wake up only a single waiter
> waiting for the lock) and so there can be some waiters for the entry lock
> although we have not locked the entry ourselves.

Oh, makes sense, thanks for the catch.
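
As a sketch, the -EEXIST bail-out could drop its reference first, along these
lines (put_unlocked_mapping_entry() is the existing fs/dax.c counterpart to
get_unlocked_mapping_entry(); the exact placement may differ in the final
patch):

		if (size_flag & RADIX_DAX_PMD) {
			if (!radix_tree_exceptional_entry(entry) ||
			    !((unsigned long)entry & RADIX_DAX_PMD)) {
				/* wake any waiter queued behind us before bailing */
				put_unlocked_mapping_entry(mapping, index, entry);
				entry = ERR_PTR(-EEXIST);
				goto out_unlock;
			}
		}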

> > +   }
> > +   } else { /* trying to grab a PTE entry */
> > +   if (radix_tree_exceptional_entry(entry) &&
> > +   ((unsigned long)entry & RADIX_DAX_PMD) &&
> > +   ((unsigned long)entry &
> > +(RADIX_DAX_HZP|RADIX_DAX_EMPTY))) {
> > +   pmd_downgrade = true;
> > +   }
> > +   }
> > +   }
> > +
> > /* No entry for given index? Make sure radix tree is big enough. */
> > -   if (!entry) {
> > +   if (!entry || pmd_downgrade) {
> > int err;
> >  
> > +   if (pmd_downgrade) {
> > +   /*
> > +* Make sure 'entry' remains valid while we drop
> > +* mapping->tree_lock.
> > +*/
> > +   entry = lock_slot(mapping, slot);
> > +   }
> > +
> > spin_unlock_irq(&mapping->tree_lock);
> > err = radix_tree_preload(
> > mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
> > if (err)
> > return ERR_PTR(err);
> 
> You need to unlock the locked entry before you return here...

Yep, thanks, fixed.
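
One way the fixed preload error path could look (a sketch; it assumes the
existing put_locked_mapping_entry() helper in fs/dax.c, which unlocks the entry
and wakes a waiter):

		spin_unlock_irq(&mapping->tree_lock);
		err = radix_tree_preload(
				mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
		if (err) {
			if (pmd_downgrade)
				put_locked_mapping_entry(mapping, index, entry);
			return ERR_PTR(err);
		}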

> > -   entry = (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY |
> > -  RADIX_DAX_ENTRY_LOCK);
> > +
> > +   /*
> > +* Besides huge zero pages the only other thing that gets
> > +* downgraded are empty entries which don't need to be
> > +* unmapped.
> > +*/
> > +   if (pmd_downgrade && ((unsigned long)entry & RADIX_DAX_HZP))
> > +   unmap_mapping_range(mapping,
> > +   (index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> > +
> > spin_lock_irq(&mapping->tree_lock);
> > -   err = radix_tree_insert(&mapping->page_tree, index, entry);
> > +
> > +   if (pmd_downgrade) {
> > +   radix_tree_delete(&mapping->page_tree, index);
> > +   mapping->nrexceptional--;
> > +   dax_wake_mapping_entry_waiter(mapping, index, entry,
> > +   false);
> 
> You need to set 'wake_all' argument here to true. Otherwise there could be
> waiters waiting for non-existent entry forever...

Interesting.   Fixed, but let me make sure I understand.  So is the issue that
you could have say 2 tasks waiting on a PMD index that has been rounded down
to the PMD index via dax_entry_waitqueue()?

The person holding the lock on the entry would remove the PMD, insert a PTE
and wake just one of the PMD aligned waiters.  That waiter would wake up, do
something PTE based (since the PMD space is now polluted with PTEs), and then
wake any waiters on its PTE index.  Meanwhile, the second waiter could sleep
forever on the PMD aligned index.  Is this correct?

So, perhaps more 

Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-11 Thread Ross Zwisler
On Mon, Oct 10, 2016 at 05:59:17PM +0200, Christoph Hellwig wrote:
> On Fri, Oct 07, 2016 at 03:09:02PM -0600, Ross Zwisler wrote:
> > -   if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> > +   if ((unsigned long)entry & RADIX_DAX_PMD)
> 
> Please introduce a proper inline helper that mask all the possible type
> bits out of the radix tree entry, and use them wherever you do the
> open cast.

Yea, this is messy.  I tried having temporary flags, but that's basically
where we came from with the old 'type' thing which used to be
RADIX_DAX_PTE|RADIX_DAX_PMD.  

After playing with it a bit it seems like the cleanest way is to have a little
flag test helper, like this:

static int dax_flag_test(void *entry, int flags)
{
return (unsigned long)entry & flags;
}

...

/*
 * Besides huge zero pages the only other thing that gets
 * downgraded are empty entries which don't need to be
 * unmapped.
 */
if (pmd_downgrade && dax_flag_test(entry, RADIX_DAX_HZP))
unmap_mapping_range(mapping,
(index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);

etc.   Please let me know if this is undesirable for some reason.  Vs keeping
the flags in a local variable, this is good because a) it doesn't require
callers to cast, and b) it makes operator precedence easy because the flags
are a param, so no "flags & (flag1|flag2)" nesting, and c) we don't keep a
local variable that could get out of sync with 'entry'.
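
Applied to the dax_entry_waitqueue() hunk quoted at the top of this mail, the
same helper would read, for example:

	if (dax_flag_test(entry, RADIX_DAX_PMD))
		index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);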

> >  restart:
> > spin_lock_irq(&mapping->tree_lock);
> > entry = get_unlocked_mapping_entry(mapping, index, &slot);
> > +
> > +   if (entry) {
> > +   if (size_flag & RADIX_DAX_PMD) {
> > +   if (!radix_tree_exceptional_entry(entry) ||
> > +   !((unsigned long)entry & RADIX_DAX_PMD)) {
> > +   entry = ERR_PTR(-EEXIST);
> > +   goto out_unlock;
> > +   }
> > +   } else { /* trying to grab a PTE entry */
> > +   if (radix_tree_exceptional_entry(entry) &&
> > +   ((unsigned long)entry & RADIX_DAX_PMD) &&
> > +   ((unsigned long)entry &
> > +(RADIX_DAX_HZP|RADIX_DAX_EMPTY))) {
> 
> And when we do these cases N times next to each other we should
> have a local variable for the valid flag bits of entry.


Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-11 Thread Jan Kara
On Fri 07-10-16 15:09:02, Ross Zwisler wrote:
> diff --git a/fs/dax.c b/fs/dax.c
> index ac3cd05..e51d51f 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -281,7 +281,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct 
> address_space *mapping,
>* queue to the start of that PMD.  This ensures that all offsets in
>* the range covered by the PMD map to the same bit lock.
>*/
> - if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> + if ((unsigned long)entry & RADIX_DAX_PMD)
>   index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);

I agree with Christoph - helper for masking type bits would make this
nicer.

...
> -static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
> +static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
> + unsigned long size_flag)
>  {
> + bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
>   void *entry, **slot;
>  
>  restart:
>   spin_lock_irq(&mapping->tree_lock);
>   entry = get_unlocked_mapping_entry(mapping, index, &slot);
> +
> + if (entry) {
> + if (size_flag & RADIX_DAX_PMD) {
> + if (!radix_tree_exceptional_entry(entry) ||
> + !((unsigned long)entry & RADIX_DAX_PMD)) {
> + entry = ERR_PTR(-EEXIST);
> + goto out_unlock;

You need to call put_unlocked_mapping_entry() if you use
get_unlocked_mapping_entry() and then decide not to lock it. The reason is
that the waitqueues we use are exclusive (we wake up only a single waiter
waiting for the lock) and so there can be some waiters for the entry lock
although we have not locked the entry ourselves.

> + }
> + } else { /* trying to grab a PTE entry */
> + if (radix_tree_exceptional_entry(entry) &&
> + ((unsigned long)entry & RADIX_DAX_PMD) &&
> + ((unsigned long)entry &
> +  (RADIX_DAX_HZP|RADIX_DAX_EMPTY))) {
> + pmd_downgrade = true;
> + }
> + }
> + }
> +
>   /* No entry for given index? Make sure radix tree is big enough. */
> - if (!entry) {
> + if (!entry || pmd_downgrade) {
>   int err;
>  
> + if (pmd_downgrade) {
> + /*
> +  * Make sure 'entry' remains valid while we drop
> +  * mapping->tree_lock.
> +  */
> + entry = lock_slot(mapping, slot);
> + }
> +
>   spin_unlock_irq(&mapping->tree_lock);
>   err = radix_tree_preload(
>   mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
>   if (err)
>   return ERR_PTR(err);

You need to unlock the locked entry before you return here...

> - entry = (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY |
> -RADIX_DAX_ENTRY_LOCK);
> +
> + /*
> +  * Besides huge zero pages the only other thing that gets
> +  * downgraded are empty entries which don't need to be
> +  * unmapped.
> +  */
> + if (pmd_downgrade && ((unsigned long)entry & RADIX_DAX_HZP))
> + unmap_mapping_range(mapping,
> + (index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
> +
>   spin_lock_irq(&mapping->tree_lock);
> - err = radix_tree_insert(&mapping->page_tree, index, entry);
> +
> + if (pmd_downgrade) {
> + radix_tree_delete(&mapping->page_tree, index);
> + mapping->nrexceptional--;
> + dax_wake_mapping_entry_waiter(mapping, index, entry,
> + false);

You need to set 'wake_all' argument here to true. Otherwise there could be
waiters waiting for non-existent entry forever...

> + }
> +
> + entry = dax_radix_entry(0, size_flag | RADIX_DAX_EMPTY);
> +
> + err = __radix_tree_insert(&mapping->page_tree, index,
> + dax_radix_order(entry), entry);
>   radix_tree_preload_end();
>   if (err) {
>   spin_unlock_irq(&mapping->tree_lock);
> - /* Someone already created the entry? */
> - if (err == -EEXIST)
> + /*
> +  * Someone already created the entry?  This is a
> +  * normal failure when inserting PMDs in a range
> +  * that already contains PTEs.  In that case we want
> +  * to return -EEXIST immediately.
> +  */
> + if (err == -EEXIST && !(size_flag & RADIX_DAX_PMD))
>   goto restart;

Add a comment here that we can get here only if there was no radix tree
entry 
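
As an illustration, the restart site with such a comment might read roughly as
follows (a sketch; the comment wording is illustrative, not Jan's):

		if (err == -EEXIST && !(size_flag & RADIX_DAX_PMD)) {
			/*
			 * We can only get here if there was no radix tree
			 * entry at 'index' when we looked: someone else
			 * created one while tree_lock was dropped, so retry
			 * the lookup from the top.
			 */
			goto restart;
		}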

Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-10 Thread Ross Zwisler
On Mon, Oct 10, 2016 at 05:59:17PM +0200, Christoph Hellwig wrote:
> On Fri, Oct 07, 2016 at 03:09:02PM -0600, Ross Zwisler wrote:
> > -   if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> > +   if ((unsigned long)entry & RADIX_DAX_PMD)
> 
> Please introduce a proper inline helper that mask all the possible type
> bits out of the radix tree entry, and use them wherever you do the
> open cast.

Sure, will do.

> >  restart:
> > spin_lock_irq(&mapping->tree_lock);
> > entry = get_unlocked_mapping_entry(mapping, index, &slot);
> > +
> > +   if (entry) {
> > +   if (size_flag & RADIX_DAX_PMD) {
> > +   if (!radix_tree_exceptional_entry(entry) ||
> > +   !((unsigned long)entry & RADIX_DAX_PMD)) {
> > +   entry = ERR_PTR(-EEXIST);
> > +   goto out_unlock;
> > +   }
> > +   } else { /* trying to grab a PTE entry */
> > +   if (radix_tree_exceptional_entry(entry) &&
> > +   ((unsigned long)entry & RADIX_DAX_PMD) &&
> > +   ((unsigned long)entry &
> > +(RADIX_DAX_HZP|RADIX_DAX_EMPTY))) {
> 
> And when we do these cases N times next to each other we should
> have a local variable for the valid flag bits of entry.

Okay, will fix.


Re: [PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-10 Thread Christoph Hellwig
On Fri, Oct 07, 2016 at 03:09:02PM -0600, Ross Zwisler wrote:
> - if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> + if ((unsigned long)entry & RADIX_DAX_PMD)

Please introduce a proper inline helper that mask all the possible type
bits out of the radix tree entry, and use them wherever you do the
open cast.

>  restart:
>   spin_lock_irq(&mapping->tree_lock);
>   entry = get_unlocked_mapping_entry(mapping, index, &slot);
> +
> + if (entry) {
> + if (size_flag & RADIX_DAX_PMD) {
> + if (!radix_tree_exceptional_entry(entry) ||
> + !((unsigned long)entry & RADIX_DAX_PMD)) {
> + entry = ERR_PTR(-EEXIST);
> + goto out_unlock;
> + }
> + } else { /* trying to grab a PTE entry */
> + if (radix_tree_exceptional_entry(entry) &&
> + ((unsigned long)entry & RADIX_DAX_PMD) &&
> + ((unsigned long)entry &
> +  (RADIX_DAX_HZP|RADIX_DAX_EMPTY))) {

And when we do these cases N times next to each other we should
have a local variable for the valid flag bits of entry.


[PATCH v5 15/17] dax: add struct iomap based DAX PMD support

2016-10-07 Thread Ross Zwisler
DAX PMDs have been disabled since Jan Kara introduced DAX radix tree based
locking.  This patch allows DAX PMDs to participate in the DAX radix tree
based locking scheme so that they can be re-enabled using the new struct
iomap based fault handlers.

There are currently three types of DAX 4k entries: 4k zero pages, 4k DAX
mappings that have an associated block allocation, and 4k DAX empty
entries.  The empty entries exist to provide locking for the duration of a
given page fault.

This patch adds three equivalent 2MiB DAX entries: Huge Zero Page (HZP)
entries, PMD DAX entries that have associated block allocations, and 2 MiB
DAX empty entries.

Unlike the 4k case where we insert a struct page* into the radix tree for
4k zero pages, for HZP we insert a DAX exceptional entry with the new
RADIX_DAX_HZP flag set.  This is because we use a single 2 MiB zero page in
every 2MiB hole mapping, and it doesn't make sense to have that same struct
page* with multiple entries in multiple trees.  This would cause contention
on the single page lock for the one Huge Zero Page, and it would break the
page->index and page->mapping associations that are assumed to be valid in
many other places in the kernel.
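
For readers of this digest, the entry encoding the description refers to looks
roughly like this (an illustrative reconstruction from the flag names used
throughout the patch; the authoritative definitions live in the
include/linux/dax.h hunk, which is truncated here):

/* illustrative layout of a DAX radix tree entry */
#define RADIX_DAX_SHIFT		(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
#define RADIX_DAX_ENTRY_LOCK	(1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
#define RADIX_DAX_PMD		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
#define RADIX_DAX_HZP		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
#define RADIX_DAX_EMPTY		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))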

One difficult use case is when one thread is trying to use 4k entries in the
radix tree for a given offset, and another thread is using 2 MiB entries
for that same offset.  The current code handles this by making the 2 MiB
user fall back to 4k entries for most cases.  This was done because it is
the simplest solution, and because the use of 2MiB pages is already
opportunistic.

If we were to try to upgrade from 4k pages to 2MiB pages for a given range,
we run into the problem of how we lock out 4k page faults for the entire
2MiB range while we clean out the radix tree so we can insert the 2MiB
entry.  We can solve this problem if we need to, but I think that the cases
where both 2MiB entries and 4K entries are being used for the same range
will be rare enough and the gain small enough that it probably won't be
worth the complexity.

Signed-off-by: Ross Zwisler 
---
 fs/dax.c            | 325 ++--
 include/linux/dax.h |  57 +++--
 mm/filemap.c        |   4 +-
 3 files changed, 339 insertions(+), 47 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index ac3cd05..e51d51f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -281,7 +281,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct 
address_space *mapping,
 * queue to the start of that PMD.  This ensures that all offsets in
 * the range covered by the PMD map to the same bit lock.
 */
-   if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
+   if ((unsigned long)entry & RADIX_DAX_PMD)
index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);
 
key->mapping = mapping;
@@ -388,35 +388,103 @@ static void *get_unlocked_mapping_entry(struct 
address_space *mapping,
  * radix tree entry locked. If the radix tree doesn't contain given index,
  * create empty exceptional entry for the index and return with it locked.
  *
+ * When requesting an entry with size RADIX_DAX_PMD, grab_mapping_entry() will
+ * either return that locked entry or will return an error.  This error will
+ * happen if there are any 4k entries (either zero pages or DAX entries)
+ * within the 2MiB range that we are requesting.
+ *
+ * We always favor 4k entries over 2MiB entries. There isn't a flow where we
+ * evict 4k entries in order to 'upgrade' them to a 2MiB entry.  A 2MiB
+ * insertion will fail if it finds any 4k entries already in the tree, and a
+ * 4k insertion will cause an existing 2MiB entry to be unmapped and
+ * downgraded to 4k entries.  This happens for both 2MiB huge zero pages as
+ * well as 2MiB empty entries.
+ *
+ * The exception to this downgrade path is for 2MiB DAX PMD entries that have
+ * real storage backing them.  We will leave these real 2MiB DAX entries in
+ * the tree, and PTE writes will simply dirty the entire 2MiB DAX entry.
+ *
  * Note: Unlike filemap_fault() we don't honor FAULT_FLAG_RETRY flags. For
  * persistent memory the benefit is doubtful. We can add that later if we can
  * show it helps.
  */
-static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
+static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
+   unsigned long size_flag)
 {
+   bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
void *entry, **slot;
 
 restart:
	spin_lock_irq(&mapping->tree_lock);
	entry = get_unlocked_mapping_entry(mapping, index, &slot);
+
+   if (entry) {
+   if (size_flag & RADIX_DAX_PMD) {
+   if (!radix_tree_exceptional_entry(entry) ||
+   !((unsigned long)entry & RADIX_DAX_PMD)) {
+   entry = ERR_PTR(-EEXIST);
+   goto out_unlock;
+   }
+ 
