The invalidate page callback used to happen outside the page table
spinlock, and thus the callback used to be allowed to sleep. This is no
longer the case. However, all calls to mmu_notifier_invalidate_page() are
now bracketed by calls to mmu_notifier_invalidate_range_start()/
mmu_notifier_invalidate_range_end().

Signed-off-by: Jérôme Glisse <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Bernhard Held <[email protected]>
Cc: Adam Borowski <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: axie <[email protected]>
Cc: Andrew Morton <[email protected]>
---
 include/linux/mmu_notifier.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index c91b3bcd158f..acc72167b9cb 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -100,6 +100,12 @@ struct mmu_notifier_ops {
         * pte because the page hasn't been freed yet and it won't be
         * freed until this returns. If required set_page_dirty has to
         * be called internally to this method.
+        *
+        * Note that previously this callback was not called while
+        * holding a spinlock, so it was allowed to sleep. This is no
+        * longer the case. However, every call to this callback is
+        * now either bracketed by calls to range_start()/range_end()
+        * or followed by a call to invalidate_range().
         */
        void (*invalidate_page)(struct mmu_notifier *mn,
                                struct mm_struct *mm,
-- 
2.13.5
