Recently, lockless_dereference() was added; it can be used in place of
hard-coding smp_read_barrier_depends(). This patch makes that change in
get_ksm_page().
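
For reference, lockless_dereference() (defined in include/linux/rcupdate.h;
the exact form may vary between kernel versions) is roughly:

	#define lockless_dereference(p) \
	({ \
		typeof(p) _________p1 = ACCESS_ONCE(p); \
		smp_read_barrier_depends(); /* Dependency ordering vs. the read above. */ \
		(_________p1); \
	})

so the load and the dependency barrier it requires are kept together at the
use site instead of being open-coded.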

Signed-off-by: Pranith Kumar <[email protected]>
---
 mm/ksm.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index d247efa..a67de79 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -542,15 +542,14 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it)
        expected_mapping = (void *)stable_node +
                                (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM);
 again:
-       kpfn = ACCESS_ONCE(stable_node->kpfn);
-       page = pfn_to_page(kpfn);
-
        /*
         * page is computed from kpfn, so on most architectures reading
         * page->mapping is naturally ordered after reading node->kpfn,
         * but on Alpha we need to be more careful.
         */
-       smp_read_barrier_depends();
+       kpfn = lockless_dereference(stable_node->kpfn);
+       page = pfn_to_page(kpfn);
+
        if (ACCESS_ONCE(page->mapping) != expected_mapping)
                goto stale;
 
-- 
1.9.1
