On Thu, 2015-04-30 at 14:13 -0700, Jason Low wrote:
> On Thu, 2015-04-30 at 14:42 -0400, Waiman Long wrote:
> 
> > I do have a question about what kind of tearing you are talking about. 
> > Do you mean tearing due to mm being changed in the middle of the 
> > access? The reason I don't like this kind of construct is that I am not 
> > sure whether the address translation p->mm->numa_scan_seq is done once 
> > or twice. I looked at the compiled code and the translation is done 
> > only once.
> > 
> > Anyway, the purpose of READ_ONCE and WRITE_ONCE is not to eliminate 
> > data tearing. They are there to make sure that the compiler won't 
> > compile away the data accesses and that the accesses are done in the 
> > order they appear in the program. I don't think it is a good idea to 
> > associate tearing elimination with those macros, so I would suggest 
> > removing the last sentence in your comment.
> 
> Yes, I can remove the last sentence in the comment since the main goal
> was to document that we're accessing this field without exclusive access.
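
For reference, the "access exactly once, in program order" semantics being
discussed can be illustrated with a minimal userspace sketch (these are
simplified stand-in macros and a hypothetical mm_sketch struct, not the
kernel's actual READ_ONCE/WRITE_ONCE definitions or mm_struct):

/*
 * Simplified stand-ins for the kernel macros: the volatile casts make
 * the compiler emit exactly one load or store per use instead of
 * fusing, re-reading, or discarding the access, and keep these
 * accesses in the order they appear in the program.
 */
#include <stdio.h>

#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

struct mm_sketch {			/* hypothetical stand-in for mm_struct */
	int numa_scan_seq;
};

int main(void)
{
	struct mm_sketch mm = { .numa_scan_seq = 0 };

	/* Same read-then-write pattern as reset_ptenuma_scan() below. */
	int seq = READ_ONCE(mm.numa_scan_seq);
	WRITE_ONCE(mm.numa_scan_seq, seq + 1);

	printf("numa_scan_seq = %d\n", READ_ONCE(mm.numa_scan_seq));
	return 0;
}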

---
Subject: [PATCH v3 2/5] sched, numa: Document usages of mm->numa_scan_seq

p->mm->numa_scan_seq is accessed using READ_ONCE/WRITE_ONCE and
modified without exclusive access. It is not immediately clear why it
is accessed this way. This patch adds comments documenting that.

Signed-off-by: Jason Low <jason.l...@hp.com>
Suggested-by: Ingo Molnar <mi...@kernel.org>
Acked-by: Rik van Riel <r...@redhat.com>
---
 kernel/sched/fair.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5a44371..65a9a1dc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1794,6 +1794,11 @@ static void task_numa_placement(struct task_struct *p)
        u64 runtime, period;
        spinlock_t *group_lock = NULL;
 
+       /*
+        * The p->mm->numa_scan_seq gets updated without
+        * exclusive access. Use READ_ONCE() here to ensure
+        * that the field is read in a single access.
+        */
        seq = READ_ONCE(p->mm->numa_scan_seq);
        if (p->numa_scan_seq == seq)
                return;
@@ -2107,6 +2112,14 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 
 static void reset_ptenuma_scan(struct task_struct *p)
 {
+       /*
+        * We only did a read acquisition of the mmap sem, so
+        * p->mm->numa_scan_seq is written to without exclusive access
+        * and the update is not guaranteed to be atomic. That's not
+        * much of an issue though, since this is just used for
+        * statistical sampling. Use READ_ONCE/WRITE_ONCE, which are not
+        * expensive, to avoid any form of compiler optimizations.
+        */
        WRITE_ONCE(p->mm->numa_scan_seq, READ_ONCE(p->mm->numa_scan_seq) + 1);
        p->mm->numa_scan_offset = 0;
 }
-- 
1.7.2.5
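
As a side note, the "not much of an issue" point in the new
reset_ptenuma_scan() comment can be demonstrated with another small
userspace sketch (again with simplified stand-in macros, not kernel code):
two threads bumping a shared sequence counter without a lock will
occasionally lose an increment, which is tolerable when the counter only
feeds statistical sampling:

/*
 * Lossy-counter sketch: concurrent READ_ONCE + WRITE_ONCE increments
 * without a lock can race, so the final count may be lower than the
 * number of increments attempted. That is acceptable here because
 * numa_scan_seq is only compared for "has it changed" style checks.
 */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

static int numa_scan_seq;

static void *bump(void *arg)
{
	for (int i = 0; i < 100000; i++)
		WRITE_ONCE(numa_scan_seq, READ_ONCE(numa_scan_seq) + 1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, bump, NULL);
	pthread_create(&b, NULL, bump, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Often prints less than 200000: racing increments get lost. */
	printf("numa_scan_seq = %d\n", READ_ONCE(numa_scan_seq));
	return 0;
}

(Builds with gcc -pthread; typeof is a GCC extension.)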


