[tip:sched/core] time/tick-broadcast: Fix tick_broadcast_offline() lockdep complaint

2019-07-25 Thread tip-bot for Paul E. McKenney
Commit-ID:  84ec3a0787086fcd25f284f59b3aa01fd6fc0a5d
Gitweb: https://git.kernel.org/tip/84ec3a0787086fcd25f284f59b3aa01fd6fc0a5d
Author: Paul E. McKenney 
AuthorDate: Tue, 25 Jun 2019 09:52:38 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 25 Jul 2019 15:51:53 +0200

time/tick-broadcast: Fix tick_broadcast_offline() lockdep complaint

The TASKS03 and TREE04 rcutorture scenarios produce the following
lockdep complaint:

WARNING: inconsistent lock state
5.2.0-rc1+ #513 Not tainted

inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
migration/1/14 [HC0[0]:SC0[0]:HE1:SE1] takes:
(____ptrval____) (tick_broadcast_lock){?...}, at: tick_broadcast_offline+0xf/0x70
{IN-HARDIRQ-W} state was registered at:
  lock_acquire+0xb0/0x1c0
  _raw_spin_lock_irqsave+0x3c/0x50
  tick_broadcast_switch_to_oneshot+0xd/0x40
  tick_switch_to_oneshot+0x4f/0xd0
  hrtimer_run_queues+0xf3/0x130
  run_local_timers+0x1c/0x50
  update_process_times+0x1c/0x50
  tick_periodic+0x26/0xc0
  tick_handle_periodic+0x1a/0x60
  smp_apic_timer_interrupt+0x80/0x2a0
  apic_timer_interrupt+0xf/0x20
  _raw_spin_unlock_irqrestore+0x4e/0x60
  rcu_nocb_gp_kthread+0x15d/0x590
  kthread+0xf3/0x130
  ret_from_fork+0x3a/0x50
irq event stamp: 171
hardirqs last  enabled at (171): [] trace_hardirqs_on_thunk+0x1a/0x1c
hardirqs last disabled at (170): [] trace_hardirqs_off_thunk+0x1a/0x1c
softirqs last  enabled at (0): [] copy_process.part.56+0x650/0x1cb0
softirqs last disabled at (0): [<>] 0x0

[...]

To reproduce, run the following rcutorture test:

 $ tools/testing/selftests/rcutorture/bin/kvm.sh --duration 5 --kconfig 
"CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y" --configs "TASKS03 TREE04"

It turns out that tick_broadcast_offline() was an innocent bystander.
After all, interrupts are supposed to be disabled throughout
take_cpu_down(), and therefore should have been disabled upon entry to
tick_offline_cpu() and thus to tick_broadcast_offline().  This suggests
that one of the CPU-hotplug notifiers was incorrectly enabling interrupts,
and leaving them enabled on return.

Some debugging code showed that the culprit was sched_cpu_dying().
It had irqs enabled after return from sched_tick_stop().  Which in turn
had irqs enabled after return from cancel_delayed_work_sync().  Which is a
wrapper around __cancel_work_timer().  Which can sleep in the case where
something else is concurrently trying to cancel the same delayed work,
and as Thomas Gleixner pointed out on IRC, sleeping is a decidedly bad
idea when you are invoked from take_cpu_down(), regardless of the state
you leave interrupts in upon return.

Code inspection located no reason why the delayed work absolutely
needed to be canceled from sched_tick_stop():  The work is not
bound to the outgoing CPU by design, given that the whole point is
to collect statistics without disturbing the outgoing CPU.

This commit therefore simply drops the cancel_delayed_work_sync() from
sched_tick_stop().  Instead, a new ->state field is added to the tick_work
structure so that the delayed-work handler function sched_tick_remote()
can avoid reposting itself.  A cpu_is_offline() check is also added to
sched_tick_remote() to avoid mucking with the state of an offlined CPU
(though it does appear safe to do so).  The sched_tick_start() and
sched_tick_stop() functions also update ->state, and sched_tick_start()
also schedules the delayed work if ->state indicates that it is not
already in flight.

Signed-off-by: Paul E. McKenney 
[ paulmck: Apply Peter Zijlstra and Frederic Weisbecker atomics feedback. ]
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Frederic Weisbecker 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/20190625165238.gj26...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 kernel/sched/core.c | 57 +
 1 file changed, 49 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2b037f195473..0b22e55cebe8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3486,8 +3486,36 @@ void scheduler_tick(void)
 
 struct tick_work {
int cpu;
+   atomic_t state;
struct delayed_work work;
 };
+/* Values for ->state, see diagram below. */
+#define TICK_SCHED_REMOTE_OFFLINE  0
+#define TICK_SCHED_REMOTE_OFFLINING1
+#define TICK_SCHED_REMOTE_RUNNING  2
+
+/*
+ * State diagram for ->state:
+ *
+ *
+ *  TICK_SCHED_REMOTE_OFFLINE
+ *|   ^
+ *|   |
+ *|   | sched_tick_remote()
+ *|   |
+ * 

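Since the diff above is cut off mid-diagram, here is a simplified, unofficial
sketch of the handshake the changelog describes.  It is not the patch body
itself; details such as the tick_work_cpu per-CPU pointer and the one-second
re-queue period are assumptions made purely for illustration.

    /*
     * Hedged sketch of the ->state handshake: sched_tick_stop() only
     * publishes TICK_SCHED_REMOTE_OFFLINING and never sleeps in
     * cancel_delayed_work_sync(); the handler notices that state and
     * declines to re-queue itself.
     */
    static void sched_tick_remote(struct work_struct *work)
    {
            struct delayed_work *dwork = to_delayed_work(work);
            struct tick_work *twork = container_of(dwork, struct tick_work, work);
            int os;

            if (!cpu_is_offline(twork->cpu)) {
                    /* ... remote-tick accounting for twork->cpu ... */
            }

            /* Re-queue only if nobody has begun offlining this CPU. */
            os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
            if (os == TICK_SCHED_REMOTE_RUNNING)
                    queue_delayed_work(system_unbound_wq, dwork, HZ);
    }

    static void sched_tick_start(int cpu)
    {
            struct tick_work *twork = per_cpu_ptr(tick_work_cpu, cpu);

            /* (Re)start only if the work is not already in flight. */
            if (atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING) ==
                TICK_SCHED_REMOTE_OFFLINE) {
                    twork->cpu = cpu;
                    INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
                    queue_delayed_work(system_unbound_wq, &twork->work, HZ);
            }
    }

    static void sched_tick_stop(int cpu)
    {
            struct tick_work *twork = per_cpu_ptr(tick_work_cpu, cpu);

            /* Mark the CPU as going away; do NOT cancel the work (it can sleep). */
            atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
    }

The key point is that sched_tick_stop() merely flips ->state and never blocks,
so take_cpu_down() no longer risks sleeping or re-enabling interrupts.
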
[tip:locking/core] tools/memory-model: Add scripts to check github litmus tests

2019-01-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  b02eb5b0961a06561b89f5b7f0dd171b750e5789
Gitweb: https://git.kernel.org/tip/b02eb5b0961a06561b89f5b7f0dd171b750e5789
Author: Paul E. McKenney 
AuthorDate: Mon, 3 Dec 2018 15:04:50 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 21 Jan 2019 11:06:59 +0100

tools/memory-model: Add scripts to check github litmus tests

The https://github.com/paulmckrcu/litmus repository contains a large
number of C-language litmus tests that include "Result:" comments
predicting the verification result.  This commit adds a number of scripts
that run tests on these litmus tests:

checkghlitmus.sh:
Runs all litmus tests in the https://github.com/paulmckrcu/litmus
archive that are C-language and that have "Result:" comment lines
documenting expected results, comparing the actual results to
those expected.  Clones the repository if it has not already
been cloned into the "tools/memory-model/litmus" directory.

initlitmushist.sh
Run all litmus tests having no more than the specified number
of processes given a specified timeout, recording the results in
.litmus.out files.  Clones the repository if it has not already
been cloned into the "tools/memory-model/litmus" directory.

newlitmushist.sh
For all new or updated litmus tests having no more than the
specified number of processes given a specified timeout, run
and record the results in .litmus.out files.

checklitmushist.sh
Run all litmus tests having .litmus.out files from previous
initlitmushist.sh or newlitmushist.sh runs, comparing the
herd output to that of the original runs.

The above scripts will run litmus tests concurrently, by default with
one job per available CPU.  Giving any of these scripts the --help
argument will cause them to print usage information.

This commit also adds a number of helper scripts that are not intended
to be invoked from the command line:

cmplitmushist.sh: Compare the output of two different runs of the same
litmus test.

judgelitmus.sh: Compare the output of a litmus test to its "Result:"
comment line.

parseargs.sh: Parse command-line arguments.

runlitmushist.sh: Run the litmus tests whose pathnames are provided one
per line on standard input.

While in the area, this commit also makes the existing checklitmus.sh
and checkalllitmus.sh scripts use parseargs.sh in order to provide a
bit of uniformity.  In addition, per-litmus-test status output is directed
to stdout, while end-of-test summary information is directed to stderr.
Finally, the error flag standardizes on "!!!" to assist those familiar
with rcutorture output.

The defaults for the parseargs.sh arguments may be overridden by using
environment variables: LKMM_DESTDIR for --destdir, LKMM_HERD_OPTIONS
for --herdoptions, LKMM_JOBS for --jobs, LKMM_PROCS for --procs, and
LKMM_TIMEOUT for --timeout.

[ paulmck: History-check summary-line changes per Alan Stern feedback. ]
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20181203230451.28921-2-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/.gitignore |   1 +
 tools/memory-model/README |   2 +
 tools/memory-model/scripts/README |  70 ++
 tools/memory-model/scripts/checkalllitmus.sh  |  53 +--
 tools/memory-model/scripts/checkghlitmus.sh   |  65 +
 tools/memory-model/scripts/checklitmus.sh |  74 +++
 tools/memory-model/scripts/checklitmushist.sh |  60 
 tools/memory-model/scripts/cmplitmushist.sh   |  87 ++
 tools/memory-model/scripts/initlitmushist.sh  |  68 ++
 tools/memory-model/scripts/judgelitmus.sh |  78 
 tools/memory-model/scripts/newlitmushist.sh   |  61 +
 tools/memory-model/scripts/parseargs.sh   | 126 ++
 tools/memory-model/scripts/runlitmushist.sh   |  87 ++
 13 files changed, 739 insertions(+), 93 deletions(-)

diff --git a/tools/memory-model/.gitignore b/tools/memory-model/.gitignore
new file mode 100644
index ..b1d34c52f3c3
--- /dev/null
+++ b/tools/memory-model/.gitignore
@@ -0,0 +1 @@
+litmus
diff --git a/tools/memory-model/README b/tools/memory-model/README
index acf9077cffaa..0f2c366518c6 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -156,6 +156,8 @@ lock.cat
 README
This file.
 
+scripts        Various scripts, see scripts/README.
+
 
 ===
 LIMITATIONS
diff --git a/tools/memory-model/scripts/README 
b/tools/memory-model/scripts/README
new file 

[tip:locking/core] tools/memory-model: Make scripts take "-j" abbreviation for "--jobs"

2019-01-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  910cc9591d1433c2e26bd1c210844b09c699dd89
Gitweb: https://git.kernel.org/tip/910cc9591d1433c2e26bd1c210844b09c699dd89
Author: Paul E. McKenney 
AuthorDate: Mon, 3 Dec 2018 15:04:51 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 21 Jan 2019 11:07:04 +0100

tools/memory-model: Make scripts take "-j" abbreviation for "--jobs"

The "--jobs" argument to the litmus-test scripts is similar to the "-jN"
argument to "make", so this commit allows the "-jN" form as well.  While
in the area, it also prohibits the various forms of "-j0".

Suggested-by: Alan Stern 
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20181203230451.28921-3-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/scripts/parseargs.sh | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/scripts/parseargs.sh 
b/tools/memory-model/scripts/parseargs.sh
index 96b307c8d64a..859e1d581e05 100644
--- a/tools/memory-model/scripts/parseargs.sh
+++ b/tools/memory-model/scripts/parseargs.sh
@@ -95,8 +95,18 @@ do
LKMM_HERD_OPTIONS="$2"
shift
;;
-   --jobs|--job)
-   checkarg --jobs "(number)" "$#" "$2" '^[0-9]\+$' '^--'
+   -j[1-9]*)
+   njobs="`echo $1 | sed -e 's/^-j//'`"
+   trailchars="`echo $njobs | sed -e 's/[0-9]\+\(.*\)$/\1/'`"
+   if test -n "$trailchars"
+   then
+   echo $1 trailing characters "'$trailchars'"
+   usagehelp
+   fi
+   LKMM_JOBS="`echo $njobs | sed -e 's/^\([0-9]\+\).*$/\1/'`"
+   ;;
+   --jobs|--job|-j)
+   checkarg --jobs "(number)" "$#" "$2" '^[1-9][0-9]\+$' '^--'
LKMM_JOBS="$2"
shift
;;


[tip:timers/core] time: Move CONTEXT_TRACKING to kernel/time/Kconfig

2019-01-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  a4cffdad731447217701d3bc6da4587bbb4d2cbd
Gitweb: https://git.kernel.org/tip/a4cffdad731447217701d3bc6da4587bbb4d2cbd
Author: Paul E. McKenney 
AuthorDate: Thu, 20 Dec 2018 09:05:25 -0800
Committer:  Thomas Gleixner 
CommitDate: Tue, 15 Jan 2019 11:16:41 +0100

time: Move CONTEXT_TRACKING to kernel/time/Kconfig

Both CONTEXT_TRACKING and CONTEXT_TRACKING_FORCE are currently defined
in kernel/rcu/kconfig, which might have made sense at some point, but
no longer does given that RCU refers to neither of these Kconfig options.

Therefore move them to kernel/time/Kconfig, where the rest of the
NO_HZ_FULL Kconfig options live.

Signed-off-by: Paul E. McKenney 
Signed-off-by: Thomas Gleixner 
Cc: Frederic Weisbecker 
Link: https://lkml.kernel.org/r/20181220170525.ga12...@linux.ibm.com


---
 kernel/rcu/Kconfig  | 30 --
 kernel/time/Kconfig | 29 +
 2 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index 939a2056c87a..37301430970e 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -87,36 +87,6 @@ config RCU_STALL_COMMON
 config RCU_NEED_SEGCBLIST
def_bool ( TREE_RCU || PREEMPT_RCU || TREE_SRCU )
 
-config CONTEXT_TRACKING
-   bool
-
-config CONTEXT_TRACKING_FORCE
-   bool "Force context tracking"
-   depends on CONTEXT_TRACKING
-   default y if !NO_HZ_FULL
-   help
- The major pre-requirement for full dynticks to work is to
- support the context tracking subsystem. But there are also
- other dependencies to provide in order to make the full
- dynticks working.
-
- This option stands for testing when an arch implements the
- context tracking backend but doesn't yet fullfill all the
- requirements to make the full dynticks feature working.
- Without the full dynticks, there is no way to test the support
- for context tracking and the subsystems that rely on it: RCU
- userspace extended quiescent state and tickless cputime
- accounting. This option copes with the absence of the full
- dynticks subsystem by forcing the context tracking on all
- CPUs in the system.
-
- Say Y only if you're working on the development of an
- architecture backend for the context tracking.
-
- Say N otherwise, this option brings an overhead that you
- don't want in production.
-
-
 config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value"
range 2 64 if 64BIT
diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
index 58b981f4bb5d..e2c038d6c13c 100644
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -117,6 +117,35 @@ config NO_HZ_FULL
 
 endchoice
 
+config CONTEXT_TRACKING
+   bool
+
+config CONTEXT_TRACKING_FORCE
+   bool "Force context tracking"
+   depends on CONTEXT_TRACKING
+   default y if !NO_HZ_FULL
+   help
+ The major pre-requirement for full dynticks to work is to
+ support the context tracking subsystem. But there are also
+ other dependencies to provide in order to make the full
+ dynticks working.
+
+ This option stands for testing when an arch implements the
+ context tracking backend but doesn't yet fullfill all the
+ requirements to make the full dynticks feature working.
+ Without the full dynticks, there is no way to test the support
+ for context tracking and the subsystems that rely on it: RCU
+ userspace extended quiescent state and tickless cputime
+ accounting. This option copes with the absence of the full
+ dynticks subsystem by forcing the context tracking on all
+ CPUs in the system.
+
+ Say Y only if you're working on the development of an
+ architecture backend for the context tracking.
+
+ Say N otherwise, this option brings an overhead that you
+ don't want in production.
+
 config NO_HZ
bool "Old Idle dynticks config"
depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS


[tip:core/rcu] tools/kernel.h: Replace synchronize_sched() with synchronize_rcu()

2018-12-04 Thread tip-bot for Paul E. McKenney
Commit-ID:  4a67e3a79e3bdc47dfd0c85a1888067d95a0282c
Gitweb: https://git.kernel.org/tip/4a67e3a79e3bdc47dfd0c85a1888067d95a0282c
Author: Paul E. McKenney 
AuthorDate: Wed, 7 Nov 2018 15:25:13 -0800
Committer:  Paul E. McKenney 
CommitDate: Sat, 1 Dec 2018 12:38:51 -0800

tools/kernel.h: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu().  This commit therefore makes this change,
even though it is but a comment.

Signed-off-by: Paul E. McKenney 
Cc: Matthew Wilcox 
Cc: 
---
 tools/include/linux/kernel.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h
index 6935ef94e77a..857d9e22826e 100644
--- a/tools/include/linux/kernel.h
+++ b/tools/include/linux/kernel.h
@@ -116,6 +116,6 @@ int scnprintf(char * buf, size_t size, const char * fmt, 
...);
 #define round_down(x, y) ((x) & ~__round_mask(x, y))
 
 #define current_gfp_context(k) 0
-#define synchronize_sched()
+#define synchronize_rcu()
 
 #endif


[tip:core/rcu] tracing: Replace synchronize_sched() and call_rcu_sched()

2018-12-04 Thread tip-bot for Paul E. McKenney
Commit-ID:  7440172974e85b1828bdd84ac6b23b5bcad9c5eb
Gitweb: https://git.kernel.org/tip/7440172974e85b1828bdd84ac6b23b5bcad9c5eb
Author: Paul E. McKenney 
AuthorDate: Tue, 6 Nov 2018 18:44:52 -0800
Committer:  Paul E. McKenney 
CommitDate: Tue, 27 Nov 2018 09:21:41 -0800

tracing: Replace synchronize_sched() and call_rcu_sched()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can
be replaced by synchronize_rcu().  Similarly, call_rcu_sched() can be
replaced by call_rcu().  This commit therefore makes these changes.

Signed-off-by: Paul E. McKenney 
Cc: Ingo Molnar 
Cc: 
Acked-by: Steven Rostedt (VMware) 
---
 include/linux/tracepoint.h |  2 +-
 kernel/trace/ftrace.c  | 24 
 kernel/trace/ring_buffer.c | 12 ++--
 kernel/trace/trace.c   | 10 +-
 kernel/trace/trace_events_filter.c |  4 ++--
 kernel/trace/trace_kprobe.c|  2 +-
 kernel/tracepoint.c|  4 ++--
 7 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 538ba1a58f5b..432080b59c26 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -82,7 +82,7 @@ int unregister_tracepoint_module_notifier(struct 
notifier_block *nb)
 static inline void tracepoint_synchronize_unregister(void)
 {
   synchronize_srcu(&tracepoint_srcu);
-   synchronize_sched();
+   synchronize_rcu();
 }
 #else
 static inline void tracepoint_synchronize_unregister(void)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f536f601bd46..5b4f73e4fd56 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -173,7 +173,7 @@ static void ftrace_sync(struct work_struct *work)
 {
/*
 * This function is just a stub to implement a hard force
-* of synchronize_sched(). This requires synchronizing
+* of synchronize_rcu(). This requires synchronizing
 * tasks even in userspace and idle.
 *
 * Yes, function tracing is rude.
@@ -934,7 +934,7 @@ ftrace_profile_write(struct file *filp, const char __user 
*ubuf,
ftrace_profile_enabled = 0;
/*
 * unregister_ftrace_profiler calls stop_machine
-* so this acts like an synchronize_sched.
+* so this acts like an synchronize_rcu.
 */
unregister_ftrace_profiler();
}
@@ -1086,7 +1086,7 @@ struct ftrace_ops *ftrace_ops_trampoline(unsigned long 
addr)
 
/*
 * Some of the ops may be dynamically allocated,
-* they are freed after a synchronize_sched().
+* they are freed after a synchronize_rcu().
 */
preempt_disable_notrace();
 
@@ -1286,7 +1286,7 @@ static void free_ftrace_hash_rcu(struct ftrace_hash *hash)
 {
if (!hash || hash == EMPTY_HASH)
return;
-   call_rcu_sched(&hash->rcu, __free_ftrace_hash_rcu);
+   call_rcu(&hash->rcu, __free_ftrace_hash_rcu);
 }
 
 void ftrace_free_filter(struct ftrace_ops *ops)
@@ -1501,7 +1501,7 @@ static bool hash_contains_ip(unsigned long ip,
  * the ip is not in the ops->notrace_hash.
  *
  * This needs to be called with preemption disabled as
- * the hashes are freed with call_rcu_sched().
+ * the hashes are freed with call_rcu().
  */
 static int
 ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs)
@@ -4496,7 +4496,7 @@ unregister_ftrace_function_probe_func(char *glob, struct 
trace_array *tr,
if (ftrace_enabled && !ftrace_hash_empty(hash))
   ftrace_run_modify_code(&tr->ops, FTRACE_UPDATE_CALLS,
      &old_hash_ops);
-   synchronize_sched();
+   synchronize_rcu();
 
   hlist_for_each_entry_safe(entry, tmp, &hhd, hlist) {
   hlist_del(&entry->hlist);
@@ -5314,7 +5314,7 @@ ftrace_graph_release(struct inode *inode, struct file 
*file)
   mutex_unlock(&graph_lock);
 
/* Wait till all users are no longer using the old hash */
-   synchronize_sched();
+   synchronize_rcu();
 
free_ftrace_hash(old_hash);
}
@@ -5707,7 +5707,7 @@ void ftrace_release_mod(struct module *mod)
list_for_each_entry_safe(mod_map, n, _mod_maps, list) {
if (mod_map->mod == mod) {
   list_del_rcu(&mod_map->list);
-   call_rcu_sched(&mod_map->rcu, ftrace_free_mod_map);
+   call_rcu(&mod_map->rcu, ftrace_free_mod_map);
break;
}
}
@@ -5927,7 +5927,7 @@ ftrace_mod_address_lookup(unsigned long addr, unsigned 
long *size,
struct ftrace_mod_map *mod_map;
const char *ret = NULL;
 
-   /* mod_map is freed via call_rcu_sched() */
+   /* mod_map is freed via call_rcu() */
   

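As a rough illustration of the rationale above (not code from this patch):
after the RCU flavor consolidation, a single synchronize_rcu() covers both
kinds of readers, so the sched-flavored primitives become redundant.

    /*
     * Hedged sketch: synchronize_rcu() now waits for both preempt-disable
     * regions and rcu_read_lock() critical sections, so a separate
     * synchronize_sched() is no longer needed.
     */
    void reader_preempt_off(void)
    {
            preempt_disable();
            /* ... access data that the updater will free ... */
            preempt_enable();
    }

    void reader_rcu(void)
    {
            rcu_read_lock();
            /* ... access the same data ... */
            rcu_read_unlock();
    }

    void updater(void)
    {
            /* Waits for both reader_preempt_off() and reader_rcu(). */
            synchronize_rcu();
            /* ... now safe to free the old data ... */
    }

The same reasoning lets call_rcu_sched() collapse into call_rcu() throughout
the diff above.
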
[tip:locking/core] tools/memory-model: Make scripts take "-j" abbreviation for "--jobs"

2018-12-03 Thread tip-bot for Paul E. McKenney
Commit-ID:  a6f1de04276d036b61c4d1dbd0367e6b430d8783
Gitweb: https://git.kernel.org/tip/a6f1de04276d036b61c4d1dbd0367e6b430d8783
Author: Paul E. McKenney 
AuthorDate: Mon, 3 Dec 2018 15:04:51 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 4 Dec 2018 07:29:52 +0100

tools/memory-model: Make scripts take "-j" abbreviation for "--jobs"

The "--jobs" argument to the litmus-test scripts is similar to the "-jN"
argument to "make", so this commit allows the "-jN" form as well.  While
in the area, it also prohibits the various forms of "-j0".

Suggested-by: Alan Stern 
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20181203230451.28921-3-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/scripts/parseargs.sh | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/scripts/parseargs.sh 
b/tools/memory-model/scripts/parseargs.sh
index 96b307c8d64a..859e1d581e05 100644
--- a/tools/memory-model/scripts/parseargs.sh
+++ b/tools/memory-model/scripts/parseargs.sh
@@ -95,8 +95,18 @@ do
LKMM_HERD_OPTIONS="$2"
shift
;;
-   --jobs|--job)
-   checkarg --jobs "(number)" "$#" "$2" '^[0-9]\+$' '^--'
+   -j[1-9]*)
+   njobs="`echo $1 | sed -e 's/^-j//'`"
+   trailchars="`echo $njobs | sed -e 's/[0-9]\+\(.*\)$/\1/'`"
+   if test -n "$trailchars"
+   then
+   echo $1 trailing characters "'$trailchars'"
+   usagehelp
+   fi
+   LKMM_JOBS="`echo $njobs | sed -e 's/^\([0-9]\+\).*$/\1/'`"
+   ;;
+   --jobs|--job|-j)
+   checkarg --jobs "(number)" "$#" "$2" '^[1-9][0-9]\+$' '^--'
LKMM_JOBS="$2"
shift
;;


[tip:locking/core] tools/memory-model: Add scripts to check github litmus tests

2018-12-03 Thread tip-bot for Paul E. McKenney
Commit-ID:  e188d24a382d609ec7ca6c1a00396202565b7831
Gitweb: https://git.kernel.org/tip/e188d24a382d609ec7ca6c1a00396202565b7831
Author: Paul E. McKenney 
AuthorDate: Mon, 3 Dec 2018 15:04:50 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 4 Dec 2018 07:29:52 +0100

tools/memory-model: Add scripts to check github litmus tests

The https://github.com/paulmckrcu/litmus repository contains a large
number of C-language litmus tests that include "Result:" comments
predicting the verification result.  This commit adds a number of scripts
that run tests on these litmus tests:

checkghlitmus.sh:
Runs all litmus tests in the https://github.com/paulmckrcu/litmus
archive that are C-language and that have "Result:" comment lines
documenting expected results, comparing the actual results to
those expected.  Clones the repository if it has not already
been cloned into the "tools/memory-model/litmus" directory.

initlitmushist.sh
Run all litmus tests having no more than the specified number
of processes given a specified timeout, recording the results in
.litmus.out files.  Clones the repository if it has not already
been cloned into the "tools/memory-model/litmus" directory.

newlitmushist.sh
For all new or updated litmus tests having no more than the
specified number of processes given a specified timeout, run
and record the results in .litmus.out files.

checklitmushist.sh
Run all litmus tests having .litmus.out files from previous
initlitmushist.sh or newlitmushist.sh runs, comparing the
herd output to that of the original runs.

The above scripts will run litmus tests concurrently, by default with
one job per available CPU.  Giving any of these scripts the --help
argument will cause them to print usage information.

This commit also adds a number of helper scripts that are not intended
to be invoked from the command line:

cmplitmushist.sh: Compare the output of two different runs of the same
litmus test.

judgelitmus.sh: Compare the output of a litmus test to its "Result:"
comment line.

parseargs.sh: Parse command-line arguments.

runlitmushist.sh: Run the litmus tests whose pathnames are provided one
per line on standard input.

While in the area, this commit also makes the existing checklitmus.sh
and checkalllitmus.sh scripts use parseargs.sh in order to provide a
bit of uniformity.  In addition, per-litmus-test status output is directed
to stdout, while end-of-test summary information is directed to stderr.
Finally, the error flag standardizes on "!!!" to assist those familiar
with rcutorture output.

The defaults for the parseargs.sh arguments may be overridden by using
environment variables: LKMM_DESTDIR for --destdir, LKMM_HERD_OPTIONS
for --herdoptions, LKMM_JOBS for --jobs, LKMM_PROCS for --procs, and
LKMM_TIMEOUT for --timeout.

[ paulmck: History-check summary-line changes per Alan Stern feedback. ]
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20181203230451.28921-2-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/.gitignore |   1 +
 tools/memory-model/README |   2 +
 tools/memory-model/scripts/README |  70 ++
 tools/memory-model/scripts/checkalllitmus.sh  |  53 +--
 tools/memory-model/scripts/checkghlitmus.sh   |  65 +
 tools/memory-model/scripts/checklitmus.sh |  74 +++
 tools/memory-model/scripts/checklitmushist.sh |  60 
 tools/memory-model/scripts/cmplitmushist.sh   |  87 ++
 tools/memory-model/scripts/initlitmushist.sh  |  68 ++
 tools/memory-model/scripts/judgelitmus.sh |  78 
 tools/memory-model/scripts/newlitmushist.sh   |  61 +
 tools/memory-model/scripts/parseargs.sh   | 126 ++
 tools/memory-model/scripts/runlitmushist.sh   |  87 ++
 13 files changed, 739 insertions(+), 93 deletions(-)

diff --git a/tools/memory-model/.gitignore b/tools/memory-model/.gitignore
new file mode 100644
index ..b1d34c52f3c3
--- /dev/null
+++ b/tools/memory-model/.gitignore
@@ -0,0 +1 @@
+litmus
diff --git a/tools/memory-model/README b/tools/memory-model/README
index acf9077cffaa..0f2c366518c6 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -156,6 +156,8 @@ lock.cat
 README
This file.
 
+scripts        Various scripts, see scripts/README.
+
 
 ===
 LIMITATIONS
diff --git a/tools/memory-model/scripts/README 
b/tools/memory-model/scripts/README
new file 

[tip:locking/core] tools/memory-model: Add more LKMM limitations

2018-10-02 Thread tip-bot for Paul E. McKenney
Commit-ID:  d8fa25c4efde0e5f31a427202e583d73d3f021c4
Gitweb: https://git.kernel.org/tip/d8fa25c4efde0e5f31a427202e583d73d3f021c4
Author: Paul E. McKenney 
AuthorDate: Wed, 26 Sep 2018 11:29:19 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 2 Oct 2018 10:28:04 +0200

tools/memory-model: Add more LKMM limitations

This commit adds more detail about compiler optimizations and
not-yet-modeled Linux-kernel APIs.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Andrea Parri 
Cc: Alexander Shishkin 
Cc: Arnaldo Carvalho de Melo 
Cc: Jiri Olsa 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Stephane Eranian 
Cc: Thomas Gleixner 
Cc: Vince Weaver 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20180926182920.27644-4-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/README | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/tools/memory-model/README b/tools/memory-model/README
index ee987ce20aae..acf9077cffaa 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -171,6 +171,12 @@ The Linux-kernel memory model has the following 
limitations:
particular, the "THE PROGRAM ORDER RELATION: po AND po-loc"
and "A WARNING" sections).
 
+   Note that this limitation in turn limits LKMM's ability to
+   accurately model address, control, and data dependencies.
+   For example, if the compiler can deduce the value of some variable
+   carrying a dependency, then the compiler can break that dependency
+   by substituting a constant of that value.
+
 2. Multiple access sizes for a single variable are not supported,
and neither are misaligned or partially overlapping accesses.
 
@@ -190,6 +196,36 @@ The Linux-kernel memory model has the following 
limitations:
However, a substantial amount of support is provided for these
operations, as shown in the linux-kernel.def file.
 
+   a.  When rcu_assign_pointer() is passed NULL, the Linux
+   kernel provides no ordering, but LKMM models this
+   case as a store release.
+
+   b.  The "unless" RMW operations are not currently modeled:
+   atomic_long_add_unless(), atomic_add_unless(),
+   atomic_inc_unless_negative(), and
+   atomic_dec_unless_positive().  These can be emulated
+   in litmus tests, for example, by using atomic_cmpxchg().
+
+   c.  The call_rcu() function is not modeled.  It can be
+   emulated in litmus tests by adding another process that
+   invokes synchronize_rcu() and the body of the callback
+   function, with (for example) a release-acquire from
+   the site of the emulated call_rcu() to the beginning
+   of the additional process.
+
+   d.  The rcu_barrier() function is not modeled.  It can be
+   emulated in litmus tests emulating call_rcu() via
+   (for example) a release-acquire from the end of each
+   additional call_rcu() process to the site of the
+   emulated rcu-barrier().
+
+   e.  Sleepable RCU (SRCU) is not modeled.  It can be
+   emulated, but perhaps not simply.
+
+   f.  Reader-writer locking is not modeled.  It can be
+   emulated in litmus tests using atomic read-modify-write
+   operations.
+
 The "herd7" tool has some additional limitations of its own, apart from
 the memory model:
 
@@ -204,3 +240,6 @@ the memory model:
 Some of these limitations may be overcome in the future, but others are
 more likely to be addressed by incorporating the Linux-kernel memory model
 into other tools.
+
+Finally, please note that LKMM is subject to change as hardware, use cases,
+and compilers evolve.
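
As a concrete, purely hypothetical illustration of item (c) above: one way to
emulate call_rcu() in a litmus test is to add a process that acquires a
release-store made at the emulated call_rcu() site, waits for a grace period,
and then runs the callback body.  None of the names below come from the
litmus repository; this is only a sketch of the emulation technique.

    C callrcu-emulation

    (* Hypothetical sketch of the emulation described in item (c). *)

    {}

    P0(int *x, int *handoff)
    {
            WRITE_ONCE(*x, 1);
            /* Emulated call_rcu() site: hand off to the callback process. */
            smp_store_release(handoff, 1);
    }

    P1(int *x, int *handoff, int *y)
    {
            int r0;
            int r1;

            r0 = smp_load_acquire(handoff); /* start of the extra process */
            synchronize_rcu();              /* emulated grace period */
            r1 = READ_ONCE(*x);             /* emulated callback body */
            WRITE_ONCE(*y, r1);
    }

    exists (1:r0=1 /\ 1:r1=0)

The exists clause should be reported as never satisfied: once the emulated
callback has observed the hand-off, it must also see the write that preceded
the emulated call_rcu().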


[tip:locking/core] tools/memory-model: Add litmus-test naming scheme

2018-10-02 Thread tip-bot for Paul E. McKenney
Commit-ID:  c4f790f244070dbab486805276ba4d1f87a057af
Gitweb: https://git.kernel.org/tip/c4f790f244070dbab486805276ba4d1f87a057af
Author: Paul E. McKenney 
AuthorDate: Wed, 26 Sep 2018 11:29:16 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 2 Oct 2018 10:28:00 +0200

tools/memory-model: Add litmus-test naming scheme

This commit documents the scheme used to generate the names for the
litmus tests.

[ paulmck: Apply feedback from Andrea Parri and Will Deacon. ]
Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Cc: Alexander Shishkin 
Cc: Arnaldo Carvalho de Melo 
Cc: Jiri Olsa 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Stephane Eranian 
Cc: Thomas Gleixner 
Cc: Vince Weaver 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: http://lkml.kernel.org/r/20180926182920.27644-1-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/README | 104 -
 1 file changed, 102 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/litmus-tests/README 
b/tools/memory-model/litmus-tests/README
index 4581ec2d3c57..5ee08f129094 100644
--- a/tools/memory-model/litmus-tests/README
+++ b/tools/memory-model/litmus-tests/README
@@ -1,4 +1,6 @@
-This directory contains the following litmus tests:
+
+LITMUS TESTS
+
 
 CoRR+poonceonce+Once.litmus
Test of read-read coherence, that is, whether or not two
@@ -36,7 +38,7 @@ IRIW+poonceonces+OnceOnce.litmus
 ISA2+pooncelock+pooncelock+pombonce.litmus
Tests whether the ordering provided by a lock-protected S
litmus test is visible to an external process whose accesses are
-   separated by smp_mb().  This addition of an external process to
+   separated by smp_mb().  This addition of an external process to
S is otherwise known as ISA2.
 
 ISA2+poonceonces.litmus
@@ -151,3 +153,101 @@ Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus
 A great many more litmus tests are available here:
 
https://github.com/paulmckrcu/litmus
+
+==
+LITMUS TEST NAMING
+==
+
+Litmus tests are usually named based on their contents, which means that
+looking at the name tells you what the litmus test does.  The naming
+scheme covers litmus tests having a single cycle that passes through
+each process exactly once, so litmus tests not fitting this description
+are named on an ad-hoc basis.
+
+The structure of a litmus-test name is the litmus-test class, a plus
+sign ("+"), and one string for each process, separated by plus signs.
+The end of the name is ".litmus".
+
+The litmus-test classes may be found in the infamous test6.pdf:
+https://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test6.pdf
+Each class defines the pattern of accesses and of the variables accessed.
+For example, if the one process writes to a pair of variables, and
+the other process reads from these same variables, the corresponding
+litmus-test class is "MP" (message passing), which may be found on the
+left-hand end of the second row of tests on page one of test6.pdf.
+
+The strings used to identify the actions carried out by each process are
+complex due to a desire to have short(er) names.  Thus, there is a tool to
+generate these strings from a given litmus test's actions.  For example,
+consider the processes from SB+rfionceonce-poonceonces.litmus:
+
+   P0(int *x, int *y)
+   {
+   int r1;
+   int r2;
+
+   WRITE_ONCE(*x, 1);
+   r1 = READ_ONCE(*x);
+   r2 = READ_ONCE(*y);
+   }
+
+   P1(int *x, int *y)
+   {
+   int r3;
+   int r4;
+
+   WRITE_ONCE(*y, 1);
+   r3 = READ_ONCE(*y);
+   r4 = READ_ONCE(*x);
+   }
+
+The next step is to construct a space-separated list of descriptors,
+interleaving descriptions of the relation between a pair of consecutive
+accesses with descriptions of the second access in the pair.
+
+P0()'s WRITE_ONCE() is read by its first READ_ONCE(), which is a
+reads-from link (rf) and internal to the P0() process.  This is
+"rfi", which is an abbreviation for "reads-from internal".  Because
+some of the tools string these abbreviations together with space
+characters separating processes, the first character is capitalized,
+resulting in "Rfi".
+
+P0()'s second access is a READ_ONCE(), as opposed to (for example)
+smp_load_acquire(), so next is "Once".  Thus far, we have "Rfi Once".
+
+P0()'s third access is also a READ_ONCE(), but to y rather than x.
+This is related to P0()'s second access by program order ("po"),
+to a different variable ("d"), and both accesses are reads ("RR").
+The resulting descriptor is "PodRR".  Because P0()'s third access is
+READ_ONCE(), we add another "Once" descriptor.
+
+A 

[tip:locking/core] tools/memory-model: Add informal LKMM documentation to MAINTAINERS

2018-07-17 Thread tip-bot for Paul E. McKenney
Commit-ID:  70b83069f70d185356beba3202b9d167ee39f051
Gitweb: https://git.kernel.org/tip/70b83069f70d185356beba3202b9d167ee39f051
Author: Paul E. McKenney 
AuthorDate: Mon, 16 Jul 2018 11:06:00 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:32 +0200

tools/memory-model: Add informal LKMM documentation to MAINTAINERS

The Linux-kernel memory model has been informal, with a number of
text files documenting it.  It would be good to make sure that these
informal descriptions are kept up to date and/or pruned appropriately.
This commit therefore brings more of those text files into the LKMM
MAINTAINERS file entry.

Signed-off-by: Paul E. McKenney 
Acked-by: Andrea Parri 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Boqun Feng 
Cc: Daniel Lustig 
Cc: David Howells 
Cc: David S. Miller 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20180716180605.16115-9-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS | 5 +
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index eb427596f3e0..900889a47627 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8318,9 +8318,14 @@ M:   "Paul E. McKenney" 
 R: Akira Yokosawa 
 R: Daniel Lustig 
 L: linux-kernel@vger.kernel.org
+L: linux-a...@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F: tools/memory-model/
+F: Documentation/atomic_bitops.txt
+F: Documentation/atomic_t.txt
+F: Documentation/core-api/atomic_ops.rst
+F: Documentation/core-api/refcount-vs-atomic.rst
 F: Documentation/memory-barriers.txt
 
 LINUX SECURITY MODULE (LSM) FRAMEWORK


[tip:locking/core] tools/memory-model: Make scripts executable

2018-07-17 Thread tip-bot for Paul E. McKenney
Commit-ID:  24675bb554f9e3d0e2bd61f6a7d8da50d224c8ff
Gitweb: https://git.kernel.org/tip/24675bb554f9e3d0e2bd61f6a7d8da50d224c8ff
Author: Paul E. McKenney 
AuthorDate: Mon, 16 Jul 2018 11:05:58 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:17 +0200

tools/memory-model: Make scripts executable

This commit makes the scripts executable to avoid the need for everyone
to do so manually in their archive.

Signed-off-by: Paul E. McKenney 
Acked-by: Akira Yokosawa 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20180716180605.16115-7-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/scripts/checkalllitmus.sh | 2 +-
 tools/memory-model/scripts/checklitmus.sh| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/scripts/checkalllitmus.sh 
b/tools/memory-model/scripts/checkalllitmus.sh
old mode 100644
new mode 100755
index af0aa15ab84e..ca528f9a24d4
--- a/tools/memory-model/scripts/checkalllitmus.sh
+++ b/tools/memory-model/scripts/checkalllitmus.sh
@@ -9,7 +9,7 @@
 # appended.
 #
 # Usage:
-#  sh checkalllitmus.sh [ directory ]
+#  checkalllitmus.sh [ directory ]
 #
 # The LINUX_HERD_OPTIONS environment variable may be used to specify
 # arguments to herd, whose default is defined by the checklitmus.sh script.
diff --git a/tools/memory-model/scripts/checklitmus.sh 
b/tools/memory-model/scripts/checklitmus.sh
old mode 100644
new mode 100755
index e2e477472844..bf12a75c0719
--- a/tools/memory-model/scripts/checklitmus.sh
+++ b/tools/memory-model/scripts/checklitmus.sh
@@ -8,7 +8,7 @@
 # with ".out" appended.
 #
 # Usage:
-#  sh checklitmus.sh file.litmus
+#  checklitmus.sh file.litmus
 #
 # The LINUX_HERD_OPTIONS environment variable may be used to specify
 # arguments to herd, which default to "-conf linux-kernel.cfg".  Thus,


[tip:locking/core] tools/memory-model: Fix ISA2+pooncelock+pooncelock+pombonce name

2018-07-17 Thread tip-bot for Paul E. McKenney
Commit-ID:  acb6c96c52ac0da6bb464173ad2cf5ada9049ad4
Gitweb: https://git.kernel.org/tip/acb6c96c52ac0da6bb464173ad2cf5ada9049ad4
Author: Paul E. McKenney 
AuthorDate: Mon, 16 Jul 2018 11:05:53 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:29:31 +0200

tools/memory-model: Fix ISA2+pooncelock+pooncelock+pombonce name

The names on the first line of the litmus tests are arbitrary,
but the convention is that they be the filename without the trailing
".litmus".  This commit therefore removes the stray trailing ".litmus"
from ISA2+pooncelock+pooncelock+pombonce.litmus's name.

Reported-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20180716180605.16115-2-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 .../litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus 
b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
index 7a39a0aaa976..0f749e419b34 100644
--- a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
+++ b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus
@@ -1,4 +1,4 @@
-C ISA2+pooncelock+pooncelock+pombonce.litmus
+C ISA2+pooncelock+pooncelock+pombonce
 
 (*
  * Result: Sometimes


[tip:locking/core] tools/memory-model: Add litmus test for full multicopy atomicity

2018-07-17 Thread tip-bot for Paul E. McKenney
Commit-ID:  b464818978d45cd4d78c8f13207891142c68bea9
Gitweb: https://git.kernel.org/tip/b464818978d45cd4d78c8f13207891142c68bea9
Author: Paul E. McKenney 
AuthorDate: Mon, 16 Jul 2018 11:05:52 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:29:29 +0200

tools/memory-model: Add litmus test for full multicopy atomicity

This commit adds a litmus test suggested by Alan Stern that is forbidden
on fully multicopy atomic systems, but allowed on other-multicopy and
on non-multicopy atomic systems.  For reference, s390 is fully multicopy
atomic, x86 and ARMv8 are other-multicopy atomic, and ARMv7 and powerpc
are non-multicopy atomic.

Suggested-by: Alan Stern 
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Acked-by: Andrea Parri 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: http://lkml.kernel.org/r/20180716180605.16115-1-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/README |  9 ++
 .../litmus-tests/SB+rfionceonce-poonceonces.litmus | 32 ++
 2 files changed, 41 insertions(+)

diff --git a/tools/memory-model/litmus-tests/README 
b/tools/memory-model/litmus-tests/README
index 17eb9a8c222d..00140aaf58b7 100644
--- a/tools/memory-model/litmus-tests/README
+++ b/tools/memory-model/litmus-tests/README
@@ -111,6 +111,15 @@ SB+mbonceonces.litmus
 SB+poonceonces.litmus
As above, but without the smp_mb() invocations.
 
+SB+rfionceonce-poonceonces.litmus
+   This litmus test demonstrates that LKMM is not fully multicopy
+   atomic.  (Neither is it other multicopy atomic.)  This litmus test
+   also demonstrates the "locations" debugging aid, which designates
+   additional registers and locations to be printed out in the dump
+   of final states in the herd7 output.  Without the "locations"
+   statement, only those registers and locations mentioned in the
+   "exists" clause will be printed.
+
 S+poonceonces.litmus
As below, but without the smp_wmb() and acquire load.
 
diff --git a/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus 
b/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus
new file mode 100644
index ..04a16603660b
--- /dev/null
+++ b/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus
@@ -0,0 +1,32 @@
+C SB+rfionceonce-poonceonces
+
+(*
+ * Result: Sometimes
+ *
+ * This litmus test demonstrates that LKMM is not fully multicopy atomic.
+ *)
+
+{}
+
+P0(int *x, int *y)
+{
+   int r1;
+   int r2;
+
+   WRITE_ONCE(*x, 1);
+   r1 = READ_ONCE(*x);
+   r2 = READ_ONCE(*y);
+}
+
+P1(int *x, int *y)
+{
+   int r3;
+   int r4;
+
+   WRITE_ONCE(*y, 1);
+   r3 = READ_ONCE(*y);
+   r4 = READ_ONCE(*x);
+}
+
+locations [0:r1; 1:r3; x; y] (* Debug aid: Print things not in "exists". *)
+exists (0:r2=0 /\ 1:r4=0)


[tip:locking/core] tools/memory-model: Flag "cumulativity" and "propagation" tests

2018-05-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  1bd3742043fa44dd0ec25770abdcdfe1f6e8681e
Gitweb: https://git.kernel.org/tip/1bd3742043fa44dd0ec25770abdcdfe1f6e8681e
Author: Paul E. McKenney 
AuthorDate: Mon, 14 May 2018 16:33:49 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:17 +0200

tools/memory-model: Flag "cumulativity" and "propagation" tests

This commit flags WRC+pooncerelease+rmbonceonce+Once.litmus
as being forbidden by smp_store_release() A-cumulativity and
IRIW+mbonceonces+OnceOnce.litmus as being forbidden by the LKMM
propagation rule.
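
For readers unfamiliar with the WRC shape, the following is a sketch of
the pattern that WRC+pooncerelease+rmbonceonce+Once exercises.  It is
reconstructed from the test's name rather than copied verbatim from the
file; P1()'s smp_store_release() is the A-cumulative store in question:

	P0(int *x)
	{
		WRITE_ONCE(*x, 1);
	}

	P1(int *x, int *y)
	{
		int r0;

		r0 = READ_ONCE(*x);
		smp_store_release(y, 1);
	}

	P2(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 2:r0=1 /\ 2:r1=0) (* Forbidden: release is A-cumulative. *)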

Suggested-by: Andrea Parri 
Reported-by: Paolo Bonzini 
[ paulmck: Updated wording as suggested by Alan Stern. ]
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-11-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/IRIW+mbonceonces+OnceOnce.litmus | 2 +-
 tools/memory-model/litmus-tests/README   | 9 ++---
 .../litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus   | 4 +++-
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/tools/memory-model/litmus-tests/IRIW+mbonceonces+OnceOnce.litmus 
b/tools/memory-model/litmus-tests/IRIW+mbonceonces+OnceOnce.litmus
index 50d5db9ea983..98a3716efa37 100644
--- a/tools/memory-model/litmus-tests/IRIW+mbonceonces+OnceOnce.litmus
+++ b/tools/memory-model/litmus-tests/IRIW+mbonceonces+OnceOnce.litmus
@@ -7,7 +7,7 @@ C IRIW+mbonceonces+OnceOnce
  * between each pairs of reads.  In other words, is smp_mb() sufficient to
  * cause two different reading processes to agree on the order of a pair
  * of writes, where each write is to a different variable by a different
- * process?
+ * process?  This litmus test exercises LKMM's "propagation" rule.
  *)
 
 {}
diff --git a/tools/memory-model/litmus-tests/README 
b/tools/memory-model/litmus-tests/README
index 6919909bbd0f..17eb9a8c222d 100644
--- a/tools/memory-model/litmus-tests/README
+++ b/tools/memory-model/litmus-tests/README
@@ -23,7 +23,8 @@ IRIW+mbonceonces+OnceOnce.litmus
between each pairs of reads.  In other words, is smp_mb()
sufficient to cause two different reading processes to agree on
the order of a pair of writes, where each write is to a different
-   variable by a different process?
+   variable by a different process?  This litmus test is forbidden
+   by LKMM's propagation rule.
 
 IRIW+poonceonces+OnceOnce.litmus
Test of independent reads from independent writes with nothing
@@ -119,8 +120,10 @@ S+wmbonceonce+poacquireonce.litmus
 
 WRC+poonceonces+Once.litmus
 WRC+pooncerelease+rmbonceonce+Once.litmus
-   These two are members of an extension of the MP litmus-test class
-   in which the first write is moved to a separate process.
+   These two are members of an extension of the MP litmus-test
+   class in which the first write is moved to a separate process.
+   The second is forbidden because smp_store_release() is
+   A-cumulative in LKMM.
 
 Z6.0+pooncelock+pooncelock+pombonce.litmus
Is the ordering provided by a spin_unlock() and a subsequent
diff --git 
a/tools/memory-model/litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus 
b/tools/memory-model/litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus
index 97fcbffde9a0..ad3448b941e6 100644
--- a/tools/memory-model/litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus
+++ b/tools/memory-model/litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus
@@ -5,7 +5,9 @@ C WRC+pooncerelease+rmbonceonce+Once
  *
  * This litmus test is an extension of the message-passing pattern, where
  * the first write is moved to a separate process.  Because it features
- * a release and a read memory barrier, it should be forbidden.
+ * a release and a read memory barrier, it should be forbidden.  More
+ * specifically, this litmus test is forbidden because smp_store_release()
+ * is A-cumulative in LKMM.
  *)
 
 {}


[tip:locking/core] tools/memory-model: Add scripts to test memory model

2018-05-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  2fb6ae162f25a9c3bc45663c479a2b15fb69e768
Gitweb: https://git.kernel.org/tip/2fb6ae162f25a9c3bc45663c479a2b15fb69e768
Author: Paul E. McKenney 
AuthorDate: Mon, 14 May 2018 16:33:47 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:17 +0200

tools/memory-model: Add scripts to test memory model

This commit adds a pair of scripts that run the memory model on litmus
tests, checking that the verification result of each litmus test matches
the result flagged in the litmus test itself.  These scripts permit easier
checking of changes to the memory model against preconceived notions.

To run the scripts, go to the tools/memory-model directory and type
"scripts/checkalllitmus.sh".  If all is well, the last line printed will
be "All litmus tests verified as was expected."

Signed-off-by: Paul E. McKenney 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1526340837-1-9-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/.gitignore   |  1 +
 tools/memory-model/scripts/checkalllitmus.sh | 73 +++
 tools/memory-model/scripts/checklitmus.sh| 86 
 3 files changed, 160 insertions(+)

diff --git a/tools/memory-model/litmus-tests/.gitignore 
b/tools/memory-model/litmus-tests/.gitignore
new file mode 100644
index ..6e2ddc54152f
--- /dev/null
+++ b/tools/memory-model/litmus-tests/.gitignore
@@ -0,0 +1 @@
+*.litmus.out
diff --git a/tools/memory-model/scripts/checkalllitmus.sh 
b/tools/memory-model/scripts/checkalllitmus.sh
new file mode 100644
index ..af0aa15ab84e
--- /dev/null
+++ b/tools/memory-model/scripts/checkalllitmus.sh
@@ -0,0 +1,73 @@
+#!/bin/sh
+#
+# Run herd tests on all .litmus files in the specified directory (which
+# defaults to litmus-tests) and check each file's result against a "Result:"
+# comment within that litmus test.  If the verification result does not
+# match that specified in the litmus test, this script prints an error
+# message prefixed with "^^^".  It also outputs verification results to
+# a file whose name is that of the specified litmus test, but with ".out"
+# appended.
+#
+# Usage:
+#  sh checkalllitmus.sh [ directory ]
+#
+# The LINUX_HERD_OPTIONS environment variable may be used to specify
+# arguments to herd, whose default is defined by the checklitmus.sh script.
+# Thus, one would normally run this in the directory containing the memory
+# model, specifying the pathname of the litmus test to check.
+#
+# This script makes no attempt to run the litmus tests concurrently.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, you can access it online at
+# http://www.gnu.org/licenses/gpl-2.0.html.
+#
+# Copyright IBM Corporation, 2018
+#
+# Author: Paul E. McKenney 
+
+litmusdir=${1-litmus-tests}
+if test -d "$litmusdir" -a -r "$litmusdir" -a -x "$litmusdir"
+then
+   :
+else
+   echo ' --- ' error: $litmusdir is not an accessible directory
+   exit 255
+fi
+
+# Find the checklitmus script.  If it is not where we expect it, then
+# assume that the caller has the PATH environment variable set
+# appropriately.
+if test -x scripts/checklitmus.sh
+then
+   clscript=scripts/checklitmus.sh
+else
+   clscript=checklitmus.sh
+fi
+
+# Run the script on all the litmus tests in the specified directory
+ret=0
+for i in litmus-tests/*.litmus
+do
+   if ! $clscript $i
+   then
+   ret=1
+   fi
+done
+if test "$ret" -ne 0
+then
+   echo " ^^^ VERIFICATION MISMATCHES"
+else
+   echo All litmus tests verified as was expected.
+fi
+exit $ret
diff --git a/tools/memory-model/scripts/checklitmus.sh 
b/tools/memory-model/scripts/checklitmus.sh
new file mode 100644
index ..e2e477472844
--- /dev/null
+++ b/tools/memory-model/scripts/checklitmus.sh
@@ -0,0 +1,86 @@
+#!/bin/sh
+#
+# Run a herd test and check the result against a "Result:" comment within
+# the litmus test.  If the verification result does not match that specified
+# in the litmus test, this script prints an error message prefixed with
+# "^^^" and exits with a 

[tip:locking/core] tools/memory-order: Update the cheat-sheet to show that smp_mb__after_atomic() orders later RMW operations

2018-05-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  bfd403bb3617e17a272e1189e5c76253052c22b8
Gitweb: https://git.kernel.org/tip/bfd403bb3617e17a272e1189e5c76253052c22b8
Author: Paul E. McKenney 
AuthorDate: Mon, 14 May 2018 16:33:44 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:16 +0200

tools/memory-order: Update the cheat-sheet to show that smp_mb__after_atomic() 
orders later RMW operations

The current cheat sheet does not claim that smp_mb__after_atomic()
orders later RMW atomic operations, which it must, at least against
earlier RMW atomic operations and whatever precedes them.

This commit therefore adds the needed "Y".
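
For illustration only (this example is not part of the patch), the row
now covers orderings of the following kind, where the barrier orders the
earlier atomic_inc(), and everything preceding it, against the later
atomic_inc():

	#include <linux/atomic.h>

	static atomic_t a, b;

	static void example(void)
	{
		atomic_inc(&a);		/* Earlier RMW atomic operation. */
		smp_mb__after_atomic();	/* Full barrier after the RMW above... */
		atomic_inc(&b);		/* ...now documented to order this later RMW. */
	}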

Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-6-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/Documentation/cheatsheet.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/memory-model/Documentation/cheatsheet.txt 
b/tools/memory-model/Documentation/cheatsheet.txt
index 46fe79afc737..33ba98d72b16 100644
--- a/tools/memory-model/Documentation/cheatsheet.txt
+++ b/tools/memory-model/Documentation/cheatsheet.txt
@@ -14,7 +14,7 @@ smp_wmb()  YW   Y 
  YW
 smp_mb() & synchronize_rcu()  CPY  YYY  Y   Y   YY
 Successful full non-void RMW  CP Y  Y  YY Y  Y  Y   Y   YY   Y
 smp_mb__before_atomic()   CPY  YYa  a   a   aY
-smp_mb__after_atomic()CPa  aYY  Y   Y   Y
+smp_mb__after_atomic()CPa  aYY  Y   Y   YY
 
 
 Key:   C:  Ordering is cumulative


[tip:locking/core] tools/memory-order: Improve key for SELF and SV

2018-05-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  35bb6ee6790600d29c598ebbf262359341f34e38
Gitweb: https://git.kernel.org/tip/35bb6ee6790600d29c598ebbf262359341f34e38
Author: Paul E. McKenney 
AuthorDate: Mon, 14 May 2018 16:33:43 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:16 +0200

tools/memory-order: Improve key for SELF and SV

The key for "SELF" was missing completely and the key for "SV" was
a bit obtuse.  This commit therefore adds a key for "SELF" and improves
the one for "SV".

Reported-by: Paolo Bonzini 
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-5-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/Documentation/cheatsheet.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/memory-model/Documentation/cheatsheet.txt 
b/tools/memory-model/Documentation/cheatsheet.txt
index c0eafdaddfa4..46fe79afc737 100644
--- a/tools/memory-model/Documentation/cheatsheet.txt
+++ b/tools/memory-model/Documentation/cheatsheet.txt
@@ -26,4 +26,5 @@ Key:  C:  Ordering is cumulative
DR: Dependent read (address dependency)
DW: Dependent write (address, data, or control dependency)
RMW:Atomic read-modify-write operation
-   SV  Same-variable access
+   SELF:   Orders self, as opposed to accesses before and/or after
+   SV: Orders later accesses to the same variable


[tip:locking/core] tools/memory-model: Add documentation of new litmus test

2018-03-10 Thread tip-bot for Paul E. McKenney
Commit-ID:  ff1fe5e079730f138c98b268ce2e8482a1d954b4
Gitweb: https://git.kernel.org/tip/ff1fe5e079730f138c98b268ce2e8482a1d954b4
Author: Paul E. McKenney 
AuthorDate: Wed, 7 Mar 2018 09:27:39 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 10 Mar 2018 10:22:23 +0100

tools/memory-model: Add documentation of new litmus test

The litmus-tests/README file lacks any mention of the new litmus test
ISA2+pooncelock+pooncelock+pombonce.litmus.  This commit therefore
adds a description of this test.

Reported-by: Alan Stern 
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1520443660-16858-3-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/README | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tools/memory-model/litmus-tests/README 
b/tools/memory-model/litmus-tests/README
index dca7d823ad57..04096fb8b8d9 100644
--- a/tools/memory-model/litmus-tests/README
+++ b/tools/memory-model/litmus-tests/README
@@ -32,6 +32,12 @@ IRIW+poonceonces+OnceOnce.litmus
order of a pair of writes, where each write is to a different
variable by a different process?
 
+ISA2+pooncelock+pooncelock+pombonce.litmus
+   Tests whether the ordering provided by a lock-protected S
+   litmus test is visible to an external process whose accesses are
+   separated by smp_mb().  This addition of an external process to
+   S is otherwise known as ISA2.
+
 ISA2+poonceonces.litmus
As below, but with store-release replaced with WRITE_ONCE()
and load-acquire replaced with READ_ONCE().


[tip:locking/core] locking/memory-barriers: De-emphasize smp_read_barrier_depends() some more

2018-03-10 Thread tip-bot for Paul E. McKenney
Commit-ID:  f28f0868feb1e79b460131bac37230e303a5f6a4
Gitweb: https://git.kernel.org/tip/f28f0868feb1e79b460131bac37230e303a5f6a4
Author: Paul E. McKenney 
AuthorDate: Wed, 7 Mar 2018 09:27:37 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 10 Mar 2018 10:22:22 +0100

locking/memory-barriers: De-emphasize smp_read_barrier_depends() some more

This commit makes further changes to memory-barriers.txt to further
de-emphasize smp_read_barrier_depends(), but leaving some discussion
for historical purposes.
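
For context, here is a kernel-style sketch (not part of the patch, and
using made-up names gp, publish() and consume()) of the pointer-publication
pattern that once needed an explicit smp_read_barrier_depends() on DEC
Alpha, and that as of v4.15 is covered by READ_ONCE() itself:

	struct foo {
		int data;
	};

	struct foo *gp;				/* Pointer being published. */

	void publish(struct foo *p)
	{
		p->data = 42;
		smp_store_release(&gp, p);	/* Order init before publication. */
	}

	int consume(void)
	{
		struct foo *q;

		q = READ_ONCE(gp);		/* Supplies the dependency barrier. */
		return q ? q->data : -1;	/* Address dependency is honored. */
	}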

Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1520443660-16858-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 26 ++
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index da6525bdc3f5..6dafc8085acc 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -52,7 +52,7 @@ CONTENTS
 
  - Varieties of memory barrier.
  - What may not be assumed about memory barriers?
- - Data dependency barriers.
+ - Data dependency barriers (historical).
  - Control dependencies.
  - SMP barrier pairing.
  - Examples of memory barrier sequences.
@@ -554,8 +554,15 @@ There are certain things that the Linux kernel memory 
barriers do not guarantee:
Documentation/DMA-API.txt
 
 
-DATA DEPENDENCY BARRIERS
-
+DATA DEPENDENCY BARRIERS (HISTORICAL)
+-
+
+As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
+added to READ_ONCE(), which means that about the only people who
+need to pay attention to this section are those working on DEC Alpha
+architecture-specific code and those working on READ_ONCE() itself.
+For those who need it, and for those who are interested in the history,
+here is the story of data-dependency barriers.
 
 The usage requirements of data dependency barriers are a little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
@@ -2843,8 +2850,9 @@ as that committed on CPU 1.
 
 
 To intervene, we need to interpolate a data dependency barrier or a read
-barrier between the loads.  This will force the cache to commit its coherency
-queue before processing any further requests:
+barrier between the loads (which as of v4.15 is supplied unconditionally
+by the READ_ONCE() macro).  This will force the cache to commit its
+coherency queue before processing any further requests:
 
CPU 1   CPU 2   COMMENT
=== === ===
@@ -2873,8 +2881,8 @@ Other CPUs may also have split caches, but must 
coordinate between the various
 cachelets for normal memory accesses.  The semantics of the Alpha removes the
 need for hardware coordination in the absence of memory barriers, which
 permitted Alpha to sport higher CPU clock rates back in the day.  However,
-please note that smp_read_barrier_depends() should not be used except in
-Alpha arch-specific code and within the READ_ONCE() macro.
+please note that (again, as of v4.15) smp_read_barrier_depends() should not
+be used except in Alpha arch-specific code and within the READ_ONCE() macro.
 
 
 CACHE COHERENCY VS DMA
@@ -3039,7 +3047,9 @@ the data dependency barrier really becomes necessary as 
this synchronises both
 caches with the memory coherence system, thus making it seem like pointer
 changes vs new data occur in the right order.
 
-The Alpha defines the Linux kernel's memory barrier model.
+The Alpha defines the Linux kernel's memory model, although as of v4.15
+the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
+greatly reduced Alpha's impact on the memory model.
 
 See the subsection on "Cache Coherency" above.
 


[tip:locking/core] tools/memory-model: Remove mention of docker/gentoo image

2018-03-10 Thread tip-bot for Paul E. McKenney
Commit-ID:  d095c12c53c7b941ad4ea96dc229a08296b37d2e
Gitweb: https://git.kernel.org/tip/d095c12c53c7b941ad4ea96dc229a08296b37d2e
Author: Paul E. McKenney 
AuthorDate: Wed, 7 Mar 2018 09:27:38 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 10 Mar 2018 10:22:23 +0100

tools/memory-model: Remove mention of docker/gentoo image

Because the docker and gentoo images haven't been updated in quite some
time, they are likely to provide more confusion than help.  This commit
therefore removes mention of them from the README file.

Reported-by: Alan Stern 
Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1520443660-16858-2-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/README | 15 ---
 1 file changed, 15 deletions(-)

diff --git a/tools/memory-model/README b/tools/memory-model/README
index ea950c566ffd..0b3a5f3c9ccd 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -27,21 +27,6 @@ separately:
 
 See "herdtools7/INSTALL.md" for installation instructions.
 
-Alternatively, Abhishek Bhardwaj has kindly provided a Docker image
-of these tools at "abhishek40/memory-model".  Abhishek suggests the
-following commands to install and use this image:
-
-  - Users should install Docker for their distribution.
-  - docker run -itd abhishek40/memory-model
-  - docker attach 
-
-Gentoo users might wish to make use of Patrick McLean's package:
-
-  https://gitweb.gentoo.org/repo/gentoo.git/tree/dev-util/herdtools7
-
-These packages may not be up-to-date with respect to the GitHub
-repository.
-
 
 ==
 BASIC USAGE: HERD7


[tip:locking/core] tools/memory-model: Convert underscores to hyphens

2018-02-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  cac79a39f200ef73ae7fc8a429ce2859ebb118d9
Gitweb: https://git.kernel.org/tip/cac79a39f200ef73ae7fc8a429ce2859ebb118d9
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Feb 2018 15:25:11 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:15 +0100

tools/memory-model: Convert underscores to hyphens

Typical cat-language code uses hyphens for word separators in
identifiers, but several LKMM identifiers use underscores instead.
This commit therefore converts underscores to hyphens in the .bell-
and .cat-file identifiers corresponding to smp_mb__before_atomic(),
smp_mb__after_atomic(), and smp_mb__after_spinlock().

Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-11-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/linux-kernel.bell | 6 +++---
 tools/memory-model/linux-kernel.cat  | 6 +++---
 tools/memory-model/linux-kernel.def  | 6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell 
b/tools/memory-model/linux-kernel.bell
index b984bbd..18885ad 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -28,9 +28,9 @@ enum Barriers = 'wmb (*smp_wmb*) ||
'rcu-lock (*rcu_read_lock*)  ||
'rcu-unlock (*rcu_read_unlock*) ||
'sync-rcu (*synchronize_rcu*) ||
-   'before_atomic (*smp_mb__before_atomic*) ||
-   'after_atomic (*smp_mb__after_atomic*) ||
-   'after_spinlock (*smp_mb__after_spinlock*)
+   'before-atomic (*smp_mb__before_atomic*) ||
+   'after-atomic (*smp_mb__after_atomic*) ||
+   'after-spinlock (*smp_mb__after_spinlock*)
 instructions F[Barriers]
 
 (* Compute matching pairs of nested Rcu-lock and Rcu-unlock *)
diff --git a/tools/memory-model/linux-kernel.cat 
b/tools/memory-model/linux-kernel.cat
index babe2b3..f0d27f8 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -29,9 +29,9 @@ let rb-dep = [R] ; fencerel(Rb_dep) ; [R]
 let rmb = [R \ Noreturn] ; fencerel(Rmb) ; [R \ Noreturn]
 let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
-   ([M] ; fencerel(Before_atomic) ; [RMW] ; po? ; [M]) |
-   ([M] ; po? ; [RMW] ; fencerel(After_atomic) ; [M]) |
-   ([M] ; po? ; [LKW] ; fencerel(After_spinlock) ; [M])
+   ([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
+   ([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
+   ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M])
 let gp = po ; [Sync-rcu] ; po?
 
 let strong-fence = mb | gp
diff --git a/tools/memory-model/linux-kernel.def 
b/tools/memory-model/linux-kernel.def
index a397387..f5a1eb0 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -21,9 +21,9 @@ smp_mb() { __fence{mb} ; }
 smp_rmb() { __fence{rmb} ; }
 smp_wmb() { __fence{wmb} ; }
 smp_read_barrier_depends() { __fence{rb_dep}; }
-smp_mb__before_atomic() { __fence{before_atomic} ; }
-smp_mb__after_atomic() { __fence{after_atomic} ; }
-smp_mb__after_spinlock() { __fence{after_spinlock} ; }
+smp_mb__before_atomic() { __fence{before-atomic} ; }
+smp_mb__after_atomic() { __fence{after-atomic} ; }
+smp_mb__after_spinlock() { __fence{after-spinlock} ; }
 
 // Exchange
 xchg(X,V)  __xchg{mb}(X,V)
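
As a reminder of what these fence identifiers model on the kernel side,
here is a small, hypothetical C sketch (the variables x and refcount are
invented for the example): smp_mb__before_atomic() supplies full ordering
ahead of an otherwise-unordered non-value-returning atomic operation,
which is what the Before-atomic fence relation above expresses.

static int x;
static atomic_t refcount = ATOMIC_INIT(1);

static void writer(void)
{
	WRITE_ONCE(x, 1);		/* Plain store... */
	smp_mb__before_atomic();	/* ...fully ordered before... */
	atomic_inc(&refcount);		/* ...this non-value-returning RMW. */
}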


[tip:locking/core] tools/memory-model: Add required herd7 version to README file

2018-02-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  8f7f2fbd00898deaf01e05a00095411811befd64
Gitweb: https://git.kernel.org/tip/8f7f2fbd00898deaf01e05a00095411811befd64
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Feb 2018 15:25:09 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:15 +0100

tools/memory-model: Add required herd7 version to README file

LKMM and the herd7 tool are co-evolving, and out-of-date herd7 tools
produce inaccurate results, often with no obvious error messages.  This
commit therefore adds the required herd7 version to the LKMM README file.

Longer term, it would be good if .cat files could specify the required
version in a manner allowing herd7 to produce clear diagnostics.

Suggested-by: Akira Yokosawa 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-9-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/README | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/memory-model/README b/tools/memory-model/README
index 91414a4..ea950c5 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -20,7 +20,8 @@ that litmus test to be exercised within the Linux kernel.
 REQUIREMENTS
 
 
-The "herd7" and "klitmus7" tools must be downloaded separately:
+Version 7.48 of the "herd7" and "klitmus7" tools must be downloaded
+separately:
 
   https://github.com/herd/herdtools7
 


[tip:locking/core] MAINTAINERS: Add Akira Yokosawa as an LKMM reviewer

2018-02-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  65b65f8e8b941785b0a08cf24ffd7084c8df327c
Gitweb: https://git.kernel.org/tip/65b65f8e8b941785b0a08cf24ffd7084c8df327c
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Feb 2018 15:25:06 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:14 +0100

MAINTAINERS: Add Akira Yokosawa as an LKMM reviewer

Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-6-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 42da350..1dd9cc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8159,6 +8159,7 @@ M:David Howells 
 M: Jade Alglave 
 M: Luc Maranget 
 M: "Paul E. McKenney" 
+R: Akira Yokosawa 
 L: linux-kernel@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git


[tip:locking/core] README: Fix a couple of punctuation errors

2018-02-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  62155147048f6c811b82cbb53bee246aee083774
Gitweb: https://git.kernel.org/tip/62155147048f6c811b82cbb53bee246aee083774
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Feb 2018 15:25:05 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:14 +0100

README: Fix a couple of punctuation errors

Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-5-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/litmus-tests/README | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/litmus-tests/README 
b/tools/memory-model/litmus-tests/README
index 9a3bb59..dca7d823 100644
--- a/tools/memory-model/litmus-tests/README
+++ b/tools/memory-model/litmus-tests/README
@@ -23,14 +23,14 @@ IRIW+mbonceonces+OnceOnce.litmus
between each pairs of reads.  In other words, is smp_mb()
sufficient to cause two different reading processes to agree on
the order of a pair of writes, where each write is to a different
-   variable by a different process.
+   variable by a different process?
 
 IRIW+poonceonces+OnceOnce.litmus
Test of independent reads from independent writes with nothing
between each pairs of reads.  In other words, is anything at all
needed to cause two different reading processes to agree on the
order of a pair of writes, where each write is to a different
-   variable by a different process.
+   variable by a different process?
 
 ISA2+poonceonces.litmus
As below, but with store-release replaced with WRITE_ONCE()
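
For readers unfamiliar with the IRIW shape referenced in the entries above,
the following hypothetical C sketch shows the question the
IRIW+mbonceonces test asks (x, y and the r* locals are invented for this
illustration): with smp_mb() between each reader's loads, the two readers
may not disagree on the order of the two independent writes, so the
outcome noted in the final comment is forbidden ("Never").

int x, y;

static void writer0(void) { WRITE_ONCE(x, 1); }
static void writer1(void) { WRITE_ONCE(y, 1); }

static void reader0(int *r1, int *r2)
{
	*r1 = READ_ONCE(x);
	smp_mb();
	*r2 = READ_ONCE(y);
}

static void reader1(int *r3, int *r4)
{
	*r3 = READ_ONCE(y);
	smp_mb();
	*r4 = READ_ONCE(x);
}

/* Outcome under test: *r1 == 1 && *r2 == 0 && *r3 == 1 && *r4 == 0. */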


[tip:locking/core] EXP litmus_tests: Add comments explaining tests' purposes

2018-02-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  8f32543b61d7daeddb5b64c80b5ad5f05cc97722
Gitweb: https://git.kernel.org/tip/8f32543b61d7daeddb5b64c80b5ad5f05cc97722
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Feb 2018 15:25:04 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:13 +0100

EXP litmus_tests: Add comments explaining tests' purposes

This commit adds comments to the litmus tests summarizing what these
tests are intended to demonstrate.

[ paulmck: Apply Andrea's and Alan's feedback. ]
Suggested-by: Ingo Molnar 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-4-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 .../memory-model/litmus-tests/CoRR+poonceonce+Once.litmus  |  7 +++
 .../memory-model/litmus-tests/CoRW+poonceonce+Once.litmus  |  7 +++
 .../memory-model/litmus-tests/CoWR+poonceonce+Once.litmus  |  7 +++
 tools/memory-model/litmus-tests/CoWW+poonceonce.litmus |  7 +++
 .../litmus-tests/IRIW+mbonceonces+OnceOnce.litmus  | 10 ++
 .../litmus-tests/IRIW+poonceonces+OnceOnce.litmus  | 10 ++
 tools/memory-model/litmus-tests/ISA2+poonceonces.litmus|  9 +
 ...SA2+pooncerelease+poacquirerelease+poacquireonce.litmus | 11 +++
 .../litmus-tests/LB+ctrlonceonce+mbonceonce.litmus | 11 +++
 .../litmus-tests/LB+poacquireonce+pooncerelease.litmus |  8 
 tools/memory-model/litmus-tests/LB+poonceonces.litmus  |  7 +++
 .../litmus-tests/MP+onceassign+derefonce.litmus| 11 ++-
 tools/memory-model/litmus-tests/MP+polocks.litmus  | 11 +++
 tools/memory-model/litmus-tests/MP+poonceonces.litmus  |  7 +++
 .../litmus-tests/MP+pooncerelease+poacquireonce.litmus |  8 
 tools/memory-model/litmus-tests/MP+porevlocks.litmus   | 11 +++
 .../litmus-tests/MP+wmbonceonce+rmbonceonce.litmus |  8 
 tools/memory-model/litmus-tests/R+mbonceonces.litmus   |  9 +
 tools/memory-model/litmus-tests/R+poonceonces.litmus   |  8 
 tools/memory-model/litmus-tests/S+poonceonces.litmus   |  9 +
 .../litmus-tests/S+wmbonceonce+poacquireonce.litmus|  7 +++
 tools/memory-model/litmus-tests/SB+mbonceonces.litmus  |  9 +
 tools/memory-model/litmus-tests/SB+poonceonces.litmus  |  8 
 .../memory-model/litmus-tests/WRC+poonceonces+Once.litmus  |  8 
 .../litmus-tests/WRC+pooncerelease+rmbonceonce+Once.litmus |  8 
 .../Z6.0+pooncelock+poonceLock+pombonce.litmus |  9 +
 .../Z6.0+pooncelock+pooncelock+pombonce.litmus |  8 
 .../Z6.0+pooncerelease+poacquirerelease+mbonceonce.litmus  | 14 ++
 28 files changed, 246 insertions(+), 1 deletion(-)

diff --git a/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus 
b/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus
index 5b83d57..967f9f2 100644
--- a/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus
+++ b/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus
@@ -1,5 +1,12 @@
 C CoRR+poonceonce+Once
 
+(*
+ * Result: Never
+ *
+ * Test of read-read coherence, that is, whether or not two successive
+ * reads from the same variable are ordered.
+ *)
+
 {}
 
 P0(int *x)
diff --git a/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus 
b/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus
index fab91c1..4635739 100644
--- a/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus
+++ b/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus
@@ -1,5 +1,12 @@
 C CoRW+poonceonce+Once
 
+(*
+ * Result: Never
+ *
+ * Test of read-write coherence, that is, whether or not a read from
+ * a given variable and a later write to that same variable are ordered.
+ *)
+
 {}
 
 P0(int *x)
diff --git a/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus 
b/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus
index 6a35ec2..bb068c9 100644
--- a/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus
+++ b/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus
@@ -1,5 +1,12 @@
 C CoWR+poonceonce+Once
 
+(*
+ * Result: Never
+ *
+ * Test of write-read coherence, that is, whether or not a write to a
+ * given variable and a later read from that same variable are ordered.
+ *)
+
 {}
 
 P0(int *x)
diff --git a/tools/memory-model/litmus-tests/CoWW+poonceonce.litmus 

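To illustrate the read-read coherence question that the new CoRR comment
describes, here is a hypothetical plain-C rendering of the same
two-process shape (x and the r* locals are invented for this sketch); the
"Never" result says the second load may not observe an older value of x
than the first:

int x;

static void p0(void)
{
	WRITE_ONCE(x, 1);
}

static void p1(int *r1, int *r2)
{
	*r1 = READ_ONCE(x);	/* If this observes 1... */
	*r2 = READ_ONCE(x);	/* ...this may not then observe 0. */
}
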
[tip:sched/urgent] sched/isolation: Make CONFIG_NO_HZ_FULL select CONFIG_CPU_ISOLATION

2017-12-18 Thread tip-bot for Paul E. McKenney
Commit-ID:  bf29cb238dc0656e6564b6a94bb82e11d2129437
Gitweb: https://git.kernel.org/tip/bf29cb238dc0656e6564b6a94bb82e11d2129437
Author: Paul E. McKenney 
AuthorDate: Thu, 14 Dec 2017 19:18:25 +0100
Committer:  Ingo Molnar 
CommitDate: Mon, 18 Dec 2017 13:46:42 +0100

sched/isolation: Make CONFIG_NO_HZ_FULL select CONFIG_CPU_ISOLATION

CONFIG_NO_HZ_FULL doesn't make sense without CONFIG_CPU_ISOLATION.  In
fact, enabling the first without the second is a regression, as the
nohz_full= boot parameter gets silently ignored.

Besides, this unnatural combination hangs the RCU gp kthread when running
rcutorture, for reasons that are not yet fully understood:

rcu_preempt kthread starved for 9974 jiffies! g4294967208
+c4294967207 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x402 ->cpu=0
rcu_preempt I 7464 8  2 0x8000
Call Trace:
__schedule+0x493/0x620
schedule+0x24/0x40
schedule_timeout+0x330/0x3b0
? preempt_count_sub+0xea/0x140
? collect_expired_timers+0xb0/0xb0
rcu_gp_kthread+0x6bf/0xef0

This commit therefore makes NO_HZ_FULL select CPU_ISOLATION, which
prevents all these bad behaviours.

Reported-by: kernel test robot 
Signed-off-by: Paul E. McKenney 
Signed-off-by: Frederic Weisbecker 
Cc: Chris Metcalf 
Cc: Christoph Lameter 
Cc: John Stultz 
Cc: Linus Torvalds 
Cc: Luiz Capitulino 
Cc: Mike Galbraith 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Thomas Gleixner 
Cc: Wanpeng Li 
Fixes: 5c4991e24c69 ("sched/isolation: Split out new CONFIG_CPU_ISOLATION=y 
config from CONFIG_NO_HZ_FULL")
Link: 
http://lkml.kernel.org/r/1513275507-29200-2-git-send-email-frede...@kernel.org
Signed-off-by: Ingo Molnar 
---
 kernel/time/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
index e776fc8..f6b5f19 100644
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -95,6 +95,7 @@ config NO_HZ_FULL
select RCU_NOCB_CPU
select VIRT_CPU_ACCOUNTING_GEN
select IRQ_WORK
+   select CPU_ISOLATION
help
 Adaptively try to shutdown the tick whenever possible, even when
 the CPU is running tasks. Typically this requires running a single


[tip:locking/core] locking/atomics, mm: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE()

2017-10-25 Thread tip-bot for Paul E. McKenney
Commit-ID:  b03a0fe0c5e4b46dcd400d27395b124499554a71
Gitweb: https://git.kernel.org/tip/b03a0fe0c5e4b46dcd400d27395b124499554a71
Author: Paul E. McKenney 
AuthorDate: Mon, 23 Oct 2017 14:07:25 -0700
Committer:  Ingo Molnar 
CommitDate: Wed, 25 Oct 2017 11:01:06 +0200

locking/atomics, mm: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE()

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't currently harmful.

However, for some features it is necessary to instrument reads and
writes separately, which is not possible with ACCESS_ONCE(). This
distinction is critical to correct operation.

It's possible to transform the bulk of kernel code using the Coccinelle
script below. However, this doesn't handle comments, leaving references
to ACCESS_ONCE() instances which have been removed. As a preparatory
step, this patch converts the mm code and comments to use
{READ,WRITE}_ONCE() consistently.


virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)


Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Acked-by: Mark Rutland 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: da...@davemloft.net
Cc: linux-a...@vger.kernel.org
Cc: m...@ellerman.id.au
Cc: sh...@kernel.org
Cc: snit...@redhat.com
Cc: thor.tha...@linux.intel.com
Cc: t...@kernel.org
Cc: v...@zeniv.linux.org.uk
Link: 
http://lkml.kernel.org/r/1508792849-3115-15-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index a728bed..cae514e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3891,9 +3891,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
/*
 * some architectures can have larger ptes than wordsize,
 * e.g.ppc44x-defconfig has CONFIG_PTE_64BIT=y and
-* CONFIG_32BIT=y, so READ_ONCE or ACCESS_ONCE cannot guarantee
-* atomic accesses.  The code below just needs a consistent
-* view for the ifs and we later double check anyway with the
+* CONFIG_32BIT=y, so READ_ONCE cannot guarantee atomic
+* accesses.  The code below just needs a consistent view
+* for the ifs and we later double check anyway with the
 * ptl lock held. So here a barrier will do.
 */
barrier();
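
As a hypothetical example of what the Coccinelle rules above do to a
typical use (the flag variable and wait_for_flag() are invented for this
sketch, not code touched by this patch):

static int flag;

static void wait_for_flag(void)
{
	while (!READ_ONCE(flag))	/* was: while (!ACCESS_ONCE(flag)) */
		cpu_relax();
	WRITE_ONCE(flag, 0);		/* was: ACCESS_ONCE(flag) = 0; */
}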


[tip:locking/core] locking/atomics, doc/filesystems: Convert ACCESS_ONCE() references

2017-10-25 Thread tip-bot for Paul E. McKenney
Commit-ID:  3587679d93d0b0e4c31e5a2ad1dffdfcb77e8526
Gitweb: https://git.kernel.org/tip/3587679d93d0b0e4c31e5a2ad1dffdfcb77e8526
Author: Paul E. McKenney 
AuthorDate: Mon, 23 Oct 2017 14:07:24 -0700
Committer:  Ingo Molnar 
CommitDate: Wed, 25 Oct 2017 11:01:05 +0200

locking/atomics, doc/filesystems: Convert ACCESS_ONCE() references

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't currently harmful.

However, for some features it is necessary to instrument reads and
writes separately, which is not possible with ACCESS_ONCE(). This
distinction is critical to correct operation.

It's possible to transform the bulk of kernel code using the Coccinelle
script below. However, this doesn't handle documentation, leaving
references to ACCESS_ONCE() instances which have been removed. As a
preparatory step, this patch converts the filesystems documentation to
use {READ,WRITE}_ONCE() consistently.


virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)


Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Acked-by: Mark Rutland 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: da...@davemloft.net
Cc: linux-a...@vger.kernel.org
Cc: m...@ellerman.id.au
Cc: sh...@kernel.org
Cc: snit...@redhat.com
Cc: thor.tha...@linux.intel.com
Cc: t...@kernel.org
Cc: v...@zeniv.linux.org.uk
Link: 
http://lkml.kernel.org/r/1508792849-3115-14-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/filesystems/path-lookup.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/filesystems/path-lookup.md 
b/Documentation/filesystems/path-lookup.md
index 1b39e08..1933ef7 100644
--- a/Documentation/filesystems/path-lookup.md
+++ b/Documentation/filesystems/path-lookup.md
@@ -826,9 +826,9 @@ If the filesystem may need to revalidate dcache entries, 
then
 *is* passed the dentry but does not have access to the `inode` or the
 `seq` number from the `nameidata`, so it needs to be extra careful
 when accessing fields in the dentry.  This "extra care" typically
-involves using `ACCESS_ONCE()` or the newer [`READ_ONCE()`] to access
-fields, and verifying the result is not NULL before using it.  This
-pattern can be see in `nfs_lookup_revalidate()`.
+involves using [`READ_ONCE()`] to access fields, and verifying the
+result is not NULL before using it.  This pattern can be seen in
+`nfs_lookup_revalidate()`.
 
 A pair of patterns
 ------------------
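
As a hedged sketch of the pattern the updated documentation text describes
(struct my_info, still_valid() and my_d_revalidate() are invented for this
illustration; only READ_ONCE(), dentry->d_fsdata and the d_revalidate()
signature are real kernel interfaces):

struct my_info {
	unsigned long cookie;
};

static bool still_valid(struct my_info *info)
{
	return info->cookie != 0;	/* Placeholder validity check. */
}

static int my_d_revalidate(struct dentry *dentry, unsigned int flags)
{
	struct my_info *info = READ_ONCE(dentry->d_fsdata);

	if (!info)
		return 0;	/* Stale; force a full lookup. */
	return still_valid(info) ? 1 : 0;
}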


[tip:core/rcu] rcu: Migrate callbacks earlier in the CPU-offline timeline

2017-08-15 Thread tip-bot for Paul E. McKenney
Commit-ID:  a58163d8ca2c8d288ee9f95989712f98473a5ac2
Gitweb: http://git.kernel.org/tip/a58163d8ca2c8d288ee9f95989712f98473a5ac2
Author: Paul E. McKenney 
AuthorDate: Tue, 20 Jun 2017 12:11:34 -0700
Committer:  Paul E. McKenney 
CommitDate: Tue, 25 Jul 2017 13:03:43 -0700

rcu: Migrate callbacks earlier in the CPU-offline timeline

RCU callbacks must be migrated away from an outgoing CPU, and this is
done near the end of the CPU-hotplug operation, after the outgoing CPU is
long gone.  Unfortunately, this means that other CPU-hotplug callbacks
can execute while the outgoing CPU's callbacks are still immobilized
on the long-gone CPU's callback lists.  If any of these CPU-hotplug
callbacks must wait, either directly or indirectly, for the invocation
of any of the immobilized RCU callbacks, the system will hang.

This commit avoids such hangs by migrating the callbacks away from the
outgoing CPU immediately upon its departure, shortly after the return
from __cpu_die() in takedown_cpu().  Thus, RCU is able to advance these
callbacks and invoke them, which allows all the after-the-fact CPU-hotplug
callbacks to wait on these RCU callbacks without risk of a hang.

While in the neighborhood, this commit also moves rcu_send_cbs_to_orphanage()
and rcu_adopt_orphan_cbs() under a pre-existing #ifdef to avoid including
dead code on the one hand and to avoid define-without-use warnings on the
other hand.

Reported-by: Jeffrey Hugo 
Link: 
http://lkml.kernel.org/r/db9c91f6-1b17-6136-84f0-03c3c2581...@codeaurora.org
Signed-off-by: Paul E. McKenney 
Cc: Thomas Gleixner 
Cc: Sebastian Andrzej Siewior 
Cc: Ingo Molnar 
Cc: Anna-Maria Gleixner 
Cc: Boris Ostrovsky 
Cc: Richard Weinberger 
---
 include/linux/rcupdate.h |   1 +
 kernel/cpu.c |   1 +
 kernel/rcu/tree.c| 209 +--
 3 files changed, 115 insertions(+), 96 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f816fc7..cf307eb 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -110,6 +110,7 @@ void rcu_bh_qs(void);
 void rcu_check_callbacks(int user);
 void rcu_report_dead(unsigned int cpu);
 void rcu_cpu_starting(unsigned int cpu);
+void rcutree_migrate_callbacks(int cpu);
 
 #ifdef CONFIG_RCU_STALL_COMMON
 void rcu_sysrq_start(void);
diff --git a/kernel/cpu.c b/kernel/cpu.c
index eee0331..bfbd649 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -650,6 +650,7 @@ static int takedown_cpu(unsigned int cpu)
__cpu_die(cpu);
 
tick_cleanup_dead_cpu(cpu);
+   rcutree_migrate_callbacks(cpu);
return 0;
 }
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 51d4c3a..9bb5dff 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2563,85 +2563,6 @@ rcu_check_quiescent_state(struct rcu_state *rsp, struct 
rcu_data *rdp)
 }
 
 /*
- * Send the specified CPU's RCU callbacks to the orphanage.  The
- * specified CPU must be offline, and the caller must hold the
- * ->orphan_lock.
- */
-static void
-rcu_send_cbs_to_orphanage(int cpu, struct rcu_state *rsp,
- struct rcu_node *rnp, struct rcu_data *rdp)
-{
-   lockdep_assert_held(&rsp->orphan_lock);
-
-   /* No-CBs CPUs do not have orphanable callbacks. */
-   if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) || rcu_is_nocb_cpu(rdp->cpu))
-   return;
-
-   /*
-* Orphan the callbacks.  First adjust the counts.  This is safe
-* because _rcu_barrier() excludes CPU-hotplug operations, so it
-* cannot be running now.  Thus no memory barrier is required.
-*/
-   rdp->n_cbs_orphaned += rcu_segcblist_n_cbs(&rdp->cblist);
-   rcu_segcblist_extract_count(&rdp->cblist, &rsp->orphan_done);
-
-   /*
-* Next, move those callbacks still needing a grace period to
-* the orphanage, where some other CPU will pick them up.
-* Some of the callbacks might have gone partway through a grace
-* period, but that is too bad.  They get to start over because we
-* cannot assume that grace periods are synchronized across CPUs.
-*/
-   rcu_segcblist_extract_pend_cbs(&rdp->cblist, &rsp->orphan_pend);
-
-   /*
-* Then move the ready-to-invoke callbacks to the orphanage,
-* where some other CPU will pick them up.  These will not be
-* required to pass though another grace period: They are done.
-*/
-   rcu_segcblist_extract_done_cbs(&rdp->cblist, &rsp->orphan_done);
-
-   /* Finally, disallow further callbacks on this CPU.  */
-   rcu_segcblist_disable(&rdp->cblist);
-}
-
-/*
- * Adopt the RCU callbacks from the specified rcu_state structure's
- * orphanage.  The caller must hold the ->orphan_lock.
- */
-static void rcu_adopt_orphan_cbs(struct 

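To illustrate the kind of dependency the changelog is worried about, here
is a hypothetical CPU-hotplug teardown callback (my_dead_cb() and the
registration shown in the trailing comment are invented for this sketch):
rcu_barrier() waits for all queued RCU callbacks, so if the just-offlined
CPU's callbacks remained stranded until later in the hotplug sequence, a
callback like this could wait forever.

static int my_dead_cb(unsigned int cpu)
{
	/*
	 * Waits for every already-queued RCU callback to be invoked.
	 * With callbacks migrated in takedown_cpu(), those from the
	 * outgoing CPU can be advanced and invoked, so this wait
	 * completes instead of hanging the hotplug operation.
	 */
	rcu_barrier();
	return 0;
}

/* Hypothetical registration, for illustration only:
 *	cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN, "subsys:dead",
 *				  NULL, my_dead_cb);
 */
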
[tip:core/rcu] srcu: Fix Kconfig botch when SRCU not selected

2017-04-24 Thread tip-bot for Paul E. McKenney
Commit-ID:  677df9d4615a2db6774cd0e8951bf7404b858b3b
Gitweb: http://git.kernel.org/tip/677df9d4615a2db6774cd0e8951bf7404b858b3b
Author: Paul E. McKenney 
AuthorDate: Sun, 23 Apr 2017 09:22:05 -0700
Committer:  Ingo Molnar 
CommitDate: Mon, 24 Apr 2017 08:14:48 +0200

srcu: Fix Kconfig botch when SRCU not selected

If the CONFIG_SRCU option is not selected, for example, when building
arch/tile allnoconfig, the following build errors appear:

kernel/rcu/tree.o: In function `srcu_online_cpu':
tree.c:(.text+0x4248): multiple definition of `srcu_online_cpu'
kernel/rcu/srcutree.o:srcutree.c:(.text+0x2120): first defined here
kernel/rcu/tree.o: In function `srcu_offline_cpu':
tree.c:(.text+0x4250): multiple definition of `srcu_offline_cpu'
kernel/rcu/srcutree.o:srcutree.c:(.text+0x2160): first defined here

The corresponding .config file shows CONFIG_TREE_SRCU=y, but no sign
of CONFIG_SRCU, which fatally confuses SRCU's #ifdefs, resulting in
the above errors.  The reason this occurs is the following line in
init/Kconfig's definition for TREE_SRCU:

default y if !TINY_RCU && !CLASSIC_SRCU

If CONFIG_CLASSIC_SRCU=n, as it will be for allnoconfig, and if
CONFIG_SMP=y, then we will get CONFIG_TREE_SRCU=y but no CONFIG_SRCU,
as seen in the .config file, which will result in the above errors.
This error did not show up during rcutorture testing because rcutorture
forces CONFIG_SRCU=y, as it must to prevent build errors in rcutorture.c.

This commit therefore conditions TREE_SRCU (and TINY_SRCU, while it is
at it) with SRCU, like this:

default y if SRCU && !TINY_RCU && !CLASSIC_SRCU

Reported-by: kbuild test robot 
Reported-by: Ingo Molnar 
Signed-off-by: Paul E. McKenney 
Link: http://lkml.kernel.org/r/20170423162205.gp3...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 init/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 4119a44..fe72c12 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -545,13 +545,13 @@ config CLASSIC_SRCU
 
 config TINY_SRCU
bool
-   default y if TINY_RCU && !CLASSIC_SRCU
+   default y if SRCU && TINY_RCU && !CLASSIC_SRCU
help
  This option selects the single-CPU non-preemptible version of SRCU.
 
 config TREE_SRCU
bool
-   default y if !TINY_RCU && !CLASSIC_SRCU
+   default y if SRCU && !TINY_RCU && !CLASSIC_SRCU
help
  This option selects the full-fledged version of SRCU.
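
To see why this manifests as duplicate symbols rather than a subtler
failure, here is a minimal sketch of the #ifdef structure involved
(simplified, with hypothetical guards; the real hooks live in
kernel/rcu/tree.c and kernel/rcu/srcutree.c):

	/* tree.c: empty fallbacks when SRCU is not configured. */
	#ifndef CONFIG_SRCU
	void srcu_online_cpu(unsigned int cpu) { }
	void srcu_offline_cpu(unsigned int cpu) { }
	#endif

	/* srcutree.c: real definitions, built whenever TREE_SRCU=y. */
	void srcu_online_cpu(unsigned int cpu)
	{
		/* ... set up this CPU's SRCU state ... */
	}

With CONFIG_TREE_SRCU=y but CONFIG_SRCU unset, both translation units
emit a non-static srcu_online_cpu(), and the link fails exactly as
quoted above.  Conditioning the TREE_SRCU default on SRCU removes the
inconsistent combination.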
 



[tip:locking/core] locking/Documentation: Clarify limited control-dependency scope

2016-06-17 Thread tip-bot for Paul E. McKenney
Commit-ID:  ebff09a6ff164aec2b33bf1f9a488c45ac108413
Gitweb: http://git.kernel.org/tip/ebff09a6ff164aec2b33bf1f9a488c45ac108413
Author: Paul E. McKenney 
AuthorDate: Wed, 15 Jun 2016 16:08:17 -0700
Committer:  Ingo Molnar 
CommitDate: Fri, 17 Jun 2016 09:54:45 +0200

locking/Documentation: Clarify limited control-dependency scope

Nothing in the control-dependencies section of memory-barriers.txt
says that control dependencies don't extend beyond the end of the
if-statement containing the control dependency.  Worse yet, in many
situations, they do extend beyond that if-statement.  In particular,
the compiler cannot destroy the control dependency given proper use of
READ_ONCE() and WRITE_ONCE().  However, a weakly ordered system having
a conditional-move instruction provides the control-dependency guarantee
only to code within the scope of the if-statement itself.

This commit therefore adds words and an example demonstrating this
limitation of control dependencies.

Reported-by: Will Deacon 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: cor...@lwn.net
Cc: linux-a...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Link: http://lkml.kernel.org/r/20160615230817.ga18...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 41 +++
 1 file changed, 41 insertions(+)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 147ae8e..a4d0a99 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -806,6 +806,41 @@ out-guess your code.  More generally, although READ_ONCE() 
does force
 the compiler to actually emit code for a given load, it does not force
 the compiler to use the results.
 
+In addition, control dependencies apply only to the then-clause and
+else-clause of the if-statement in question.  In particular, it does
+not necessarily apply to code following the if-statement:
+
+   q = READ_ONCE(a);
+   if (q) {
+   WRITE_ONCE(b, p);
+   } else {
+   WRITE_ONCE(b, r);
+   }
+   WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from "a". */
+
+It is tempting to argue that there in fact is ordering because the
+compiler cannot reorder volatile accesses and also cannot reorder
+the writes to "b" with the condition.  Unfortunately for this line
+of reasoning, the compiler might compile the two writes to "b" as
+conditional-move instructions, as in this fanciful pseudo-assembly
+language:
+
+   ld r1,a
+   ld r2,p
+   ld r3,r
+   cmp r1,$0
+   cmov,ne r4,r2
+   cmov,eq r4,r3
+   st r4,b
+   st $1,c
+
+A weakly ordered CPU would have no dependency of any sort between the load
+from "a" and the store to "c".  The control dependencies would extend
+only to the pair of cmov instructions and the store depending on them.
+In short, control dependencies apply only to the stores in the then-clause
+and else-clause of the if-statement in question (including functions
+invoked by those two clauses), not to code following that if-statement.
+
 Finally, control dependencies do -not- provide transitivity.  This is
 demonstrated by two related examples, with the initial values of
 x and y both being zero:
@@ -869,6 +904,12 @@ In summary:
   atomic{,64}_read() can help to preserve your control dependency.
   Please see the COMPILER BARRIER section for more information.
 
+  (*) Control dependencies apply only to the then-clause and else-clause
+  of the if-statement containing the control dependency, including
+  any functions that these two clauses call.  Control dependencies
+  do -not- apply to code following the if-statement containing the
+  control dependency.
+
   (*) Control dependencies pair normally with other types of barriers.
 
   (*) Control dependencies do -not- provide transitivity.  If you
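
If ordering between the load from "a" and a store that follows the
if-statement really is needed, a minimal sketch of one way to get it
(building on the example above) is an explicit full barrier after the
if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
	} else {
		WRITE_ONCE(b, r);
	}
	smp_mb();  /* Full barrier: orders the load from "a" before the store to "c". */
	WRITE_ONCE(c, 1);

This is of course heavier than the control dependency it supplements,
which is why the documentation confines control dependencies to the
two clauses of the if-statement rather than promising more.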



[tip:locking/core] lcoking/locktorture: Simplify the torture_runnable computation

2016-04-28 Thread tip-bot for Paul E. McKenney
Commit-ID:  5db4298133d99b3dfc60d6899ac9df169769c899
Gitweb: http://git.kernel.org/tip/5db4298133d99b3dfc60d6899ac9df169769c899
Author: Paul E. McKenney 
AuthorDate: Tue, 26 Apr 2016 10:22:08 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 28 Apr 2016 10:57:51 +0200

lcoking/locktorture: Simplify the torture_runnable computation

This commit replaces an #ifdef with IS_ENABLED(), saving five lines.

Signed-off-by: Paul E. McKenney 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: cor...@lwn.net
Cc: d...@stgolabs.net
Cc: dhowe...@redhat.com
Cc: linux-...@vger.kernel.org
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1461691328-5429-4-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 kernel/locking/locktorture.c | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index d066a50..f8c5af5 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -75,12 +75,7 @@ struct lock_stress_stats {
long n_lock_acquired;
 };
 
-#if defined(MODULE)
-#define LOCKTORTURE_RUNNABLE_INIT 1
-#else
-#define LOCKTORTURE_RUNNABLE_INIT 0
-#endif
-int torture_runnable = LOCKTORTURE_RUNNABLE_INIT;
+int torture_runnable = IS_ENABLED(MODULE);
 module_param(torture_runnable, int, 0444);
 MODULE_PARM_DESC(torture_runnable, "Start locktorture at module init");
 



[tip:locking/core] locking/Documentation: Clarify relationship of barrier() to control dependencies

2016-04-13 Thread tip-bot for Paul E. McKenney
Commit-ID:  a5052657c164107032d521f0d9e92703d78845f2
Gitweb: http://git.kernel.org/tip/a5052657c164107032d521f0d9e92703d78845f2
Author: Paul E. McKenney 
AuthorDate: Tue, 12 Apr 2016 08:52:49 -0700
Committer:  Ingo Molnar 
CommitDate: Wed, 13 Apr 2016 08:52:21 +0200

locking/Documentation: Clarify relationship of barrier() to control dependencies

The current documentation claims that the compiler ignores barrier(),
which is not the case.  Instead, the compiler carefully pays attention
to barrier(), but in a creative way that still manages to destroy
the control dependency.  This commit sets the story straight.

Reported-by: Mathieu Desnoyers 
Signed-off-by: Paul E. McKenney 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: bobby.pr...@gmail.com
Cc: dhowe...@redhat.com
Cc: dipan...@in.ibm.com
Cc: dvh...@linux.intel.com
Cc: eduma...@google.com
Cc: fweis...@gmail.com
Cc: jiangshan...@gmail.com
Cc: j...@joshtriplett.org
Cc: o...@redhat.com
Cc: rost...@goodmis.org
Link: 
http://lkml.kernel.org/r/1460476375-27803-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 3729cbe..ec12890 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -813,9 +813,10 @@ In summary:
   the same variable, then those stores must be ordered, either by
   preceding both of them with smp_mb() or by using smp_store_release()
   to carry out the stores.  Please note that it is -not- sufficient
-  to use barrier() at beginning of each leg of the "if" statement,
-  as optimizing compilers do not necessarily respect barrier()
-  in this case.
+  to use barrier() at beginning of each leg of the "if" statement
+  because, as shown by the example above, optimizing compilers can
+  destroy the control dependency while respecting the letter of the
+  barrier() law.
 
   (*) Control dependencies require at least one run-time conditional
   between the prior load and the subsequent store, and this
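
The "example above" that the new wording refers to is the
memory-barriers.txt case in which both legs of the if-statement store
the same value; a condensed sketch (do_something() and
do_something_else() are placeholders):

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, p);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, p);
		do_something_else();
	}

Because the two stores are identical, the compiler may hoist them (and
the barrier()) out of the conditional:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, p);  /* BUG: no longer ordered after the load from "a". */
	if (q)
		do_something();
	else
		do_something_else();

The barrier() is still respected -- no access moves across it -- yet
the conditional store, and with it the control dependency, is gone.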



[tip:perf/urgent] perf: Disable IRQs across RCU RS CS that acquires scheduler lock

2015-11-09 Thread tip-bot for Paul E. McKenney
Commit-ID:  2fd59077755c44dbbd9b2fa89cf988235a3a6a2b
Gitweb: http://git.kernel.org/tip/2fd59077755c44dbbd9b2fa89cf988235a3a6a2b
Author: Paul E. McKenney 
AuthorDate: Wed, 4 Nov 2015 05:48:38 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 9 Nov 2015 16:13:11 +0100

perf: Disable IRQs across RCU RS CS that acquires scheduler lock

The perf_lock_task_context() function disables preemption across its
RCU read-side critical section because that critical section acquires
a scheduler lock.  If there was a preemption during that RCU read-side
critical section, the rcu_read_unlock() could attempt to acquire scheduler
locks, resulting in deadlock.

However, recent optimizations to expedited grace periods mean that IPI
handlers that execute during preemptible RCU read-side critical sections
can now cause the subsequent rcu_read_unlock() to acquire scheduler locks.
Disabling preemption does nothing to prevent these IPI handlers from
executing, so these optimizations introduced a deadlock.  In theory,
this deadlock could be avoided by pulling all wakeups and printk()s out
from rnp->lock critical sections, but in practice this would re-introduce
some RCU CPU stall warning bugs.

Given that acquiring scheduler locks entails disabling interrupts, these
deadlocks can be avoided by disabling interrupts (instead of disabling
preemption) across any RCU read-side critical section that acquires scheduler
locks and holds them across the rcu_read_unlock().  This commit therefore
makes this change for perf_lock_task_context().

Reported-by: Dave Jones 
Reported-by: Peter Zijlstra 
Signed-off-by: Paul E. McKenney 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Arnaldo Carvalho de Melo 
Cc: Jiri Olsa 
Cc: Linus Torvalds 
Cc: Stephane Eranian 
Cc: Thomas Gleixner 
Link: http://lkml.kernel.org/r/20151104134838.gr29...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 kernel/events/core.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index ea02109..f8e5c44 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1050,13 +1050,13 @@ retry:
/*
 * One of the few rules of preemptible RCU is that one cannot do
 * rcu_read_unlock() while holding a scheduler (or nested) lock when
-* part of the read side critical section was preemptible -- see
+* part of the read side critical section was irqs-enabled -- see
 * rcu_read_unlock_special().
 *
 * Since ctx->lock nests under rq->lock we must ensure the entire read
-* side critical section is non-preemptible.
+* side critical section has interrupts disabled.
 */
-   preempt_disable();
+   local_irq_save(*flags);
rcu_read_lock();
ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
if (ctx) {
@@ -1070,21 +1070,22 @@ retry:
 * if so.  If we locked the right context, then it
 * can't get swapped on us any more.
 */
-   raw_spin_lock_irqsave(&ctx->lock, *flags);
+   raw_spin_lock(&ctx->lock);
if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
-   raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+   raw_spin_unlock(&ctx->lock);
rcu_read_unlock();
-   preempt_enable();
+   local_irq_restore(*flags);
goto retry;
}
 
if (!atomic_inc_not_zero(&ctx->refcount)) {
-   raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+   raw_spin_unlock(&ctx->lock);
ctx = NULL;
}
}
rcu_read_unlock();
-   preempt_enable();
+   if (!ctx)
+   local_irq_restore(*flags);
return ctx;
 }
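
Reduced to a minimal sketch (example_reader() and sched_nested_lock
are placeholders, not perf code), the rule the patch follows is: if an
RCU read-side critical section acquires a lock that nests inside
scheduler locks and holds it across rcu_read_unlock(), keep interrupts
disabled -- not merely preemption -- for the whole critical section:

	static void example_reader(raw_spinlock_t *sched_nested_lock)
	{
		unsigned long flags;

		local_irq_save(flags);			/* not preempt_disable() */
		rcu_read_lock();
		raw_spin_lock(sched_nested_lock);	/* irqs already off, plain lock */
		/* ... access RCU-protected data ... */
		rcu_read_unlock();	/* lock still held: safe because irqs are off */
		raw_spin_unlock(sched_nested_lock);
		local_irq_restore(flags);
	}

With interrupts disabled across the whole critical section,
rcu_read_unlock() is not required to acquire scheduler locks, so it may
safely be invoked with such a lock held.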
 



[tip:core/locking] Documentation/memory-barriers.txt: Add needed ACCESS_ONCE() calls to memory-barriers.txt

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  2ecf810121c7eae34473b8fa108112036bc61127
Gitweb: http://git.kernel.org/tip/2ecf810121c7eae34473b8fa108112036bc61127
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:04 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:08 +0100

Documentation/memory-barriers.txt: Add needed ACCESS_ONCE() calls to 
memory-barriers.txt

The Documentation/memory-barriers.txt file was written before
the need for ACCESS_ONCE() was fully appreciated.  It therefore
contains no ACCESS_ONCE() calls, which can be a problem when
people lift examples from it.  This commit therefore adds
ACCESS_ONCE() calls.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Josh Triplett 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 206 +++---
 1 file changed, 126 insertions(+), 80 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 020cccd..1d06723 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -194,18 +194,22 @@ There are some minimal guarantees that may be expected of 
a CPU:
  (*) On any given CPU, dependent memory accesses will be issued in order, with
  respect to itself.  This means that for:
 
-   Q = P; D = *Q;
+   ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
 
  the CPU will issue the following memory operations:
 
Q = LOAD P, D = LOAD *Q
 
- and always in that order.
+ and always in that order.  On most systems, smp_read_barrier_depends()
+ does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
+ is required to prevent compiler mischief.  Please note that you
+ should normally use something like rcu_dereference() instead of
+ open-coding smp_read_barrier_depends().
 
  (*) Overlapping loads and stores within a particular CPU will appear to be
  ordered within that CPU.  This means that for:
 
-   a = *X; *X = b;
+   a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
 
  the CPU will only issue the following sequence of memory operations:
 
@@ -213,7 +217,7 @@ There are some minimal guarantees that may be expected of a 
CPU:
 
  And for:
 
-   *X = c; d = *X;
+   ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
 
  the CPU will only issue:
 
@@ -224,6 +228,41 @@ There are some minimal guarantees that may be expected of 
a CPU:
 
 And there are a number of things that _must_ or _must_not_ be assumed:
 
+ (*) It _must_not_ be assumed that the compiler will do what you want with
+ memory references that are not protected by ACCESS_ONCE().  Without
+ ACCESS_ONCE(), the compiler is within its rights to do all sorts
+ of "creative" transformations:
+
+ (-) Repeat the load, possibly getting a different value on the second
+ and subsequent loads.  This is especially prone to happen when
+register pressure is high.
+
+ (-) Merge adjacent loads and stores to the same location.  The most
+ familiar example is the transformation from:
+
+   while (a)
+   do_something();
+
+ to something like:
+
+   if (a)
+   for (;;)
+   do_something();
+
+ Using ACCESS_ONCE() as follows prevents this sort of optimization:
+
+   while (ACCESS_ONCE(a))
+   do_something();
+
+ (-) "Store tearing", where a single store in the source code is split
+ into smaller stores in the object code.  Note that gcc really
+will do this on some architectures when storing certain constants.
+It can be cheaper to do a series of immediate stores than to
+form the constant in a register and then to store that register.
+
+ (-) "Load tearing", which splits loads in a manner analogous to
+store tearing.
+
  (*) It _must_not_ be assumed that independent loads and stores will be issued
  in the order given.  This means that for:
 
@@ -450,14 +489,14 @@ The usage requirements of data dependency barriers are a 
little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
 following sequence of events:
 
-   CPU 1   CPU 2
-   === ===
+   CPU 1 CPU 2
+   ===   ===
{ A == 1, B == 2, C = 3, P == &A, Q == &C }
B = 4;
<write barrier>
-   P = &B
-   Q = P;
-   D = *Q;
+   ACCESS_ONCE(P) = &B
+ Q = ACCESS_ONCE(P);
+ D = *Q;
 
 There's a clear data dependency here, and it would seem that by the end of the
 sequence, Q must be either &A or &B, and that:
@@ -477,15 +516,15 @@ Alpha).
 To deal with 

[tip:core/locking] Documentation/memory-barriers.txt: Document ACCESS_ONCE()

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  692118dac47e65f5131686b1103ebfebf0cbfa8e
Gitweb: http://git.kernel.org/tip/692118dac47e65f5131686b1103ebfebf0cbfa8e
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:07 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:12 +0100

Documentation/memory-barriers.txt: Document ACCESS_ONCE()

The situations in which ACCESS_ONCE() is required are not well
documented, so this commit adds some verbiage to
memory-barriers.txt.

Reported-by: Peter Zijlstra 
Signed-off-by: Paul E. McKenney 
Reviewed-by: Josh Triplett 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-4-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 306 +-
 1 file changed, 271 insertions(+), 35 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index deafa36..919fd60 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -231,37 +231,8 @@ And there are a number of things that _must_ or _must_not_ 
be assumed:
  (*) It _must_not_ be assumed that the compiler will do what you want with
  memory references that are not protected by ACCESS_ONCE().  Without
  ACCESS_ONCE(), the compiler is within its rights to do all sorts
- of "creative" transformations:
-
- (-) Repeat the load, possibly getting a different value on the second
- and subsequent loads.  This is especially prone to happen when
-register pressure is high.
-
- (-) Merge adjacent loads and stores to the same location.  The most
- familiar example is the transformation from:
-
-   while (a)
-   do_something();
-
- to something like:
-
-   if (a)
-   for (;;)
-   do_something();
-
- Using ACCESS_ONCE() as follows prevents this sort of optimization:
-
-   while (ACCESS_ONCE(a))
-   do_something();
-
- (-) "Store tearing", where a single store in the source code is split
- into smaller stores in the object code.  Note that gcc really
-will do this on some architectures when storing certain constants.
-It can be cheaper to do a series of immediate stores than to
-form the constant in a register and then to store that register.
-
- (-) "Load tearing", which splits loads in a manner analogous to
-store tearing.
+ of "creative" transformations, which are covered in the Compiler
+ Barrier section.
 
  (*) It _must_not_ be assumed that independent loads and stores will be issued
  in the order given.  This means that for:
@@ -749,7 +720,8 @@ In summary:
 
   (*) Control dependencies require that the compiler avoid reordering the
   dependency into nonexistence.  Careful use of ACCESS_ONCE() or
-  barrier() can help to preserve your control dependency.
+  barrier() can help to preserve your control dependency.  Please
+  see the Compiler Barrier section for more information.
 
   (*) Control dependencies do -not- provide transitivity.  If you
   need transitivity, use smp_mb().
@@ -1248,12 +1220,276 @@ compiler from moving the memory accesses either side 
of it to the other side:
barrier();
 
 This is a general barrier -- there are no read-read or write-write variants
-of barrier().  Howevever, ACCESS_ONCE() can be thought of as a weak form
+of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
 for barrier() that affects only the specific accesses flagged by the
 ACCESS_ONCE().
 
-The compiler barrier has no direct effect on the CPU, which may then reorder
-things however it wishes.
+The barrier() function has the following effects:
+
+ (*) Prevents the compiler from reordering accesses following the
+ barrier() to precede any accesses preceding the barrier().
+ One example use for this property is to ease communication between
+ interrupt-handler code and the code that was interrupted.
+
+ (*) Within a loop, forces the compiler to load the variables used
+ in that loop's conditional on each pass through that loop.
+
+The ACCESS_ONCE() function can prevent any number of optimizations that,
+while perfectly safe in single-threaded code, can be fatal in concurrent
+code.  Here are some examples of these sorts of optimizations:
+
+ (*) The compiler is within its rights to merge successive loads from
+ the same variable.  Such merging can cause the compiler to "optimize"
+ the following code:
+
+   while (tmp = a)
+   do_something_with(tmp);
+
+ into the following code, which, although in some sense legitimate
+ for single-threaded code, is almost certainly not what the developer
+ intended:
+
+   if (tmp = a)
+   for (;;)
+   

[tip:core/locking] Documentation/memory-barriers.txt: Downgrade UNLOCK+BLOCK

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  17eb88e068430014deb709e5af34197cdf2390c9
Gitweb: http://git.kernel.org/tip/17eb88e068430014deb709e5af34197cdf2390c9
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:09 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:15 +0100

Documentation/memory-barriers.txt: Downgrade UNLOCK+BLOCK

Historically, an UNLOCK+LOCK pair executed by one CPU, by one
task, or on a given lock variable has implied a full memory
barrier.  In a recent LKML thread, the wisdom of this historical
approach was called into question:
http://www.spinics.net/lists/linux-mm/msg65653.html, in part due
to the memory-order complexities of low-handoff-overhead queued
locks on x86 systems.

This patch therefore removes this guarantee from the
documentation, and further documents how to restore it via a new
smp_mb__after_unlock_lock() primitive.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-6-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 84 ---
 1 file changed, 69 insertions(+), 15 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 919fd60..cb753c8 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -402,12 +402,18 @@ And a couple of implicit varieties:
  Memory operations that occur after an UNLOCK operation may appear to
  happen before it completes.
 
- LOCK and UNLOCK operations are guaranteed to appear with respect to each
- other strictly in the order specified.
-
  The use of LOCK and UNLOCK operations generally precludes the need for
  other sorts of memory barrier (but note the exceptions mentioned in the
- subsection "MMIO write barrier").
+ subsection "MMIO write barrier").  In addition, an UNLOCK+LOCK pair
+ is -not- guaranteed to act as a full memory barrier.  However,
+ after a LOCK on a given lock variable, all memory accesses preceding any
+ prior UNLOCK on that same variable are guaranteed to be visible.
+ In other words, within a given lock variable's critical section,
+ all accesses of all previous critical sections for that lock variable
+ are guaranteed to have completed.
+
+ This means that LOCK acts as a minimal "acquire" operation and
+ UNLOCK acts as a minimal "release" operation.
 
 
 Memory barriers are only required where there's a possibility of interaction
@@ -1633,8 +1639,12 @@ for each construct.  These operations all imply certain 
barriers:
  Memory operations issued after the LOCK will be completed after the LOCK
  operation has completed.
 
- Memory operations issued before the LOCK may be completed after the LOCK
- operation has completed.
+ Memory operations issued before the LOCK may be completed after the
+ LOCK operation has completed.  An smp_mb__before_spinlock(), combined
+ with a following LOCK, orders prior loads against subsequent stores
+ and stores and prior stores against subsequent stores.  Note that
+ this is weaker than smp_mb()!  The smp_mb__before_spinlock()
+ primitive is free on many architectures.
 
  (2) UNLOCK operation implication:
 
@@ -1654,9 +1664,6 @@ for each construct.  These operations all imply certain 
barriers:
  All LOCK operations issued before an UNLOCK operation will be completed
  before the UNLOCK operation.
 
- All UNLOCK operations issued before a LOCK operation will be completed
- before the LOCK operation.
-
  (5) Failed conditional LOCK implication:
 
  Certain variants of the LOCK operation may fail, either due to being
@@ -1664,9 +1671,6 @@ for each construct.  These operations all imply certain 
barriers:
  signal whilst asleep waiting for the lock to become available.  Failed
  locks do not imply any sort of barrier.
 
-Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
-equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
-
 [!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
 barriers is that the effects of instructions outside of a critical section
 may seep into the inside of the critical section.
@@ -1677,13 +1681,57 @@ LOCK, and an access following the UNLOCK to happen 
before the UNLOCK, and the
 two accesses can themselves then cross:
 
*A = a;
-   LOCK
-   UNLOCK
+   LOCK M
+   UNLOCK M
*B = b;
 
 may occur as:
 
-   LOCK, STORE *B, STORE *A, UNLOCK
+   LOCK M, STORE *B, STORE *A, UNLOCK M
+
+This same reordering can of course occur if the LOCK and UNLOCK are
+to the same lock variable, but only from the perspective of another
+CPU not holding that lock.
+
+In short, an UNLOCK followed by a LOCK may -not- be assumed to be a full
+memory barrier because it is 

[tip:core/locking] powerpc: Full barrier for smp_mb__after_unlock_lock()

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  919fc6e34831d1c2b58bfb5ae261dc3facc9b269
Gitweb: http://git.kernel.org/tip/919fc6e34831d1c2b58bfb5ae261dc3facc9b269
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:11 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:18 +0100

powerpc: Full barrier for smp_mb__after_unlock_lock()

The powerpc lock acquisition sequence is as follows:

lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

lwsync; stw;

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1
does a lock acquisition then a load (say, r1=y), then there is
no guarantee of a full memory barrier between the store to 'x'
and the load from 'y'. To see this, suppose that CPUs 0 and 1
are hardware threads in the same core that share a store buffer,
and that CPU 2 is in some other core, and that CPU 2 does the
following:

y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock
acquisition and release sequences above can result in r1 and r2
both being equal to zero, which could not happen if unlock+lock
was a full barrier.

This commit therefore makes powerpc's
smp_mb__after_unlock_lock() be a full barrier.

Signed-off-by: Paul E. McKenney 
Acked-by: Benjamin Herrenschmidt 
Reviewed-by: Peter Zijlstra 
Cc: Paul Mackerras 
Cc: linuxppc-...@lists.ozlabs.org
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 arch/powerpc/include/asm/spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h 
b/arch/powerpc/include/asm/spinlock.h
index 5f54a74..f6e78d6 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
 #include 
 #include 
 
+#define smp_mb__after_unlock_lock()   smp_mb()  /* Full ordering for lock. */
+
 #define arch_spin_is_locked(x) ((x)->slock != 0)
 
 #ifdef CONFIG_PPC64
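
The changelog's scenario, restated as a sketch in kernel C (x, y, r1,
r2 and the lock are placeholders; everything starts at zero):

	/* CPU 0 */
	ACCESS_ONCE(x) = 1;
	spin_unlock(&lock);		/* lwsync; stw */

	/* CPU 1: same core as CPU 0, sharing a store buffer */
	spin_lock(&lock);		/* lwarx; cmpwi; bne; stwcx.; lwsync */
	smp_mb__after_unlock_lock();	/* sync: full barrier on powerpc */
	r1 = ACCESS_ONCE(y);

	/* CPU 2: some other core */
	ACCESS_ONCE(y) = 1;
	smp_mb();			/* sync */
	r2 = ACCESS_ONCE(x);

Without the smp_mb__after_unlock_lock(), the outcome r1 == 0 && r2 == 0
is permitted, as described above; with it, the UNLOCK+LOCK pair behaves
as a full barrier and that outcome is forbidden.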


[tip:core/locking] rcu: Apply smp_mb__after_unlock_lock() to preserve grace periods

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  6303b9c87d52eaedc82968d3ff59c471e7682afc
Gitweb: http://git.kernel.org/tip/6303b9c87d52eaedc82968d3ff59c471e7682afc
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:10 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:16 +0100

rcu: Apply smp_mb__after_unlock_lock() to preserve grace periods

RCU must ensure that there is the equivalent of a full memory
barrier between any memory access preceding a grace period and any
memory access following that same grace period, regardless of
which CPU(s) happen to execute the two memory accesses.
Therefore, downgrading UNLOCK+LOCK to no longer imply a full
memory barrier requires some adjustments to RCU.

This commit therefore adds smp_mb__after_unlock_lock()
invocations as needed after the RCU lock acquisitions that need
to be part of a full-memory-barrier UNLOCK+LOCK.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-7-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 kernel/rcu/tree.c| 18 +-
 kernel/rcu/tree_plugin.h | 13 +
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index dd08198..a6205a0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1133,8 +1133,10 @@ rcu_start_future_gp(struct rcu_node *rnp, struct 
rcu_data *rdp)
 * hold it, acquire the root rcu_node structure's lock in order to
 * start one (if needed).
 */
-   if (rnp != rnp_root)
+   if (rnp != rnp_root) {
raw_spin_lock(&rnp_root->lock);
+   smp_mb__after_unlock_lock();
+   }
 
/*
 * Get a new grace-period number.  If there really is no grace
@@ -1354,6 +1356,7 @@ static void note_gp_changes(struct rcu_state *rsp, struct 
rcu_data *rdp)
local_irq_restore(flags);
return;
}
+   smp_mb__after_unlock_lock();
__note_gp_changes(rsp, rnp, rdp);
raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
@@ -1368,6 +1371,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 
rcu_bind_gp_kthread();
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
if (rsp->gp_flags == 0) {
/* Spurious wakeup, tell caller to go back to sleep.  */
raw_spin_unlock_irq(&rnp->lock);
@@ -1409,6 +1413,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 */
rcu_for_each_node_breadth_first(rsp, rnp) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rdp = this_cpu_ptr(rsp->rda);
rcu_preempt_check_blocked_tasks(rnp);
rnp->qsmask = rnp->qsmaskinit;
@@ -1463,6 +1468,7 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int 
fqs_state_in)
/* Clear flag to prevent immediate re-entry. */
if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rsp->gp_flags &= ~RCU_GP_FLAG_FQS;
raw_spin_unlock_irq(&rnp->lock);
}
@@ -1480,6 +1486,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
struct rcu_node *rnp = rcu_get_root(rsp);
 
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
gp_duration = jiffies - rsp->gp_start;
if (gp_duration > rsp->gp_max)
rsp->gp_max = gp_duration;
@@ -1505,6 +1512,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 */
rcu_for_each_node_breadth_first(rsp, rnp) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
ACCESS_ONCE(rnp->completed) = rsp->gpnum;
rdp = this_cpu_ptr(rsp->rda);
if (rnp == rdp->mynode)
@@ -1515,6 +1523,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
}
rnp = rcu_get_root(rsp);
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rcu_nocb_gp_set(rnp, nocb);
 
rsp->completed = rsp->gpnum; /* Declare grace period done. */
@@ -1749,6 +1758,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state 
*rsp,
rnp_c = rnp;
rnp = rnp->parent;
raw_spin_lock_irqsave(&rnp->lock, flags);
+   smp_mb__after_unlock_lock();
WARN_ON_ONCE(rnp_c->qsmask);
}
 
@@ -1778,6 +1788,7 @@ rcu_report_qs_rdp(int cpu, struct rcu_state *rsp, struct 
rcu_data *rdp)
 
rnp = rdp->mynode;
raw_spin_lock_irqsave(&rnp->lock, flags);
+   smp_mb__after_unlock_lock();
if (rdp->passed_quiesce == 0 || rdp->gpnum != rnp->gpnum ||
rnp->completed == rnp->gpnum) {
 
@@ -1992,6 +2003,7 @@ static void rcu_cleanup_dead_cpu(int cpu, struct 
rcu_state *rsp)
mask = rdp->grpmask;/* rnp->grplo is constant. */
do {

[tip:core/locking] Documentation/memory-barriers.txt: Add long atomic examples to memory-barriers.txt

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  fb2b581968db140586e8d7db38ff278f60872313
Gitweb: http://git.kernel.org/tip/fb2b581968db140586e8d7db38ff278f60872313
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:05 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:09 +0100

Documentation/memory-barriers.txt: Add long atomic examples to 
memory-barriers.txt

Although the atomic_long_t functions are quite useful, they are
a bit obscure.  This commit therefore adds the common ones
alongside their atomic_t counterparts in
Documentation/memory-barriers.txt.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Josh Triplett 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-2-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 24 +---
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 1d06723..2d22da0 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1728,21 +1728,23 @@ explicit lock operations, described later).  These 
include:
 
xchg();
cmpxchg();
-   atomic_xchg();
-   atomic_cmpxchg();
-   atomic_inc_return();
-   atomic_dec_return();
-   atomic_add_return();
-   atomic_sub_return();
-   atomic_inc_and_test();
-   atomic_dec_and_test();
-   atomic_sub_and_test();
-   atomic_add_negative();
-   atomic_add_unless();/* when succeeds (returns 1) */
+   atomic_xchg();  atomic_long_xchg();
+   atomic_cmpxchg();   atomic_long_cmpxchg();
+   atomic_inc_return();atomic_long_inc_return();
+   atomic_dec_return();atomic_long_dec_return();
+   atomic_add_return();atomic_long_add_return();
+   atomic_sub_return();atomic_long_sub_return();
+   atomic_inc_and_test();  atomic_long_inc_and_test();
+   atomic_dec_and_test();  atomic_long_dec_and_test();
+   atomic_sub_and_test();  atomic_long_sub_and_test();
+   atomic_add_negative();  atomic_long_add_negative();
test_and_set_bit();
test_and_clear_bit();
test_and_change_bit();
 
+   /* when succeeds (returns 1) */
+   atomic_add_unless();atomic_long_add_unless();
+
 These are used for such things as implementing LOCK-class and UNLOCK-class
 operations and adjusting reference counters towards object destruction, and as
 such the implicit memory barrier effects are necessary.
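
For example (a sketch; "obj" and its refcount field are hypothetical),
the implied full barrier is what makes the classic reference-count
teardown safe:

	if (atomic_long_dec_and_test(&obj->refcount)) {
		/*
		 * The barrier implied by the atomic RMW keeps this CPU's
		 * prior accesses to *obj from being reordered past the
		 * final decrement and the free.
		 */
		kfree(obj);
	}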


[tip:core/locking] locking: Add an smp_mb__after_unlock_lock() for UNLOCK+BLOCK barrier

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  01352fb81658cbf78c55844de8e3d1d606bbf3f8
Gitweb: http://git.kernel.org/tip/01352fb81658cbf78c55844de8e3d1d606bbf3f8
Author: Paul E. McKenney 
AuthorDate: Wed, 11 Dec 2013 13:59:08 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 16 Dec 2013 11:36:13 +0100

locking: Add an smp_mb__after_unlock_lock() for UNLOCK+BLOCK barrier

The Linux kernel has traditionally required that an UNLOCK+LOCK
pair act as a full memory barrier when either (1) that
UNLOCK+LOCK pair was executed by the same CPU or task, or (2)
the same lock variable was used for the UNLOCK and LOCK.  It now
seems likely that very few places in the kernel rely on this
full-memory-barrier semantic, and with the advent of queued
locks, providing this semantic either requires complex
reasoning, or for some architectures, added overhead.

This commit therefore adds a smp_mb__after_unlock_lock(), which
may be placed after a LOCK primitive to restore the
full-memory-barrier semantic. All definitions are currently
no-ops, but will be upgraded for some architectures when queued
locks arrive.

Signed-off-by: Paul E. McKenney 
Reviewed-by: Peter Zijlstra 
Cc: 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Link: 
http://lkml.kernel.org/r/1386799151-2219-5-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 include/linux/spinlock.h | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 75f3494..3f2867f 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -130,6 +130,16 @@ do {   
\
 #define smp_mb__before_spinlock()  smp_wmb()
 #endif
 
+/*
+ * Place this after a lock-acquisition primitive to guarantee that
+ * an UNLOCK+LOCK pair act as a full barrier.  This guarantee applies
+ * if the UNLOCK and LOCK are executed by the same CPU or if the
+ * UNLOCK and LOCK operate on the same lock variable.
+ */
+#ifndef smp_mb__after_unlock_lock
+#define smp_mb__after_unlock_lock()   do { } while (0)
+#endif
+
 /**
  * raw_spin_unlock_wait - wait until the spinlock gets unlocked
  * @lock: the spinlock in question.
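
A usage sketch (first_lock and second_lock are placeholder locks):
placed immediately after a LOCK, the new primitive makes the preceding
UNLOCK plus that LOCK act as a full barrier, here for the same-CPU case
described in the comment above:

	static DEFINE_SPINLOCK(first_lock);
	static DEFINE_SPINLOCK(second_lock);

	static void example_unlock_lock_full_barrier(void)
	{
		spin_lock(&first_lock);
		/* ... critical section A ... */
		spin_unlock(&first_lock);

		spin_lock(&second_lock);
		smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now a full barrier */
		/* ... critical section B: ordered after all of A's accesses ... */
		spin_unlock(&second_lock);
	}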



[tip:core/locking] Documentation/memory-barriers.txt: Add long atomic examples to memory-barriers.txt

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  fb2b581968db140586e8d7db38ff278f60872313
Gitweb: http://git.kernel.org/tip/fb2b581968db140586e8d7db38ff278f60872313
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:05 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:09 +0100

Documentation/memory-barriers.txt: Add long atomic examples to 
memory-barriers.txt

Although the atomic_long_t functions are quite useful, they are
a bit obscure.  This commit therefore adds the common ones
alongside their atomic_t counterparts in
Documentation/memory-barriers.txt.

Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Josh Triplett j...@joshtriplett.org
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-2-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 Documentation/memory-barriers.txt | 24 +---
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 1d06723..2d22da0 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1728,21 +1728,23 @@ explicit lock operations, described later).  These 
include:
 
xchg();
cmpxchg();
-   atomic_xchg();
-   atomic_cmpxchg();
-   atomic_inc_return();
-   atomic_dec_return();
-   atomic_add_return();
-   atomic_sub_return();
-   atomic_inc_and_test();
-   atomic_dec_and_test();
-   atomic_sub_and_test();
-   atomic_add_negative();
-   atomic_add_unless();/* when succeeds (returns 1) */
+   atomic_xchg();  atomic_long_xchg();
+   atomic_cmpxchg();   atomic_long_cmpxchg();
+   atomic_inc_return();atomic_long_inc_return();
+   atomic_dec_return();atomic_long_dec_return();
+   atomic_add_return();atomic_long_add_return();
+   atomic_sub_return();atomic_long_sub_return();
+   atomic_inc_and_test();  atomic_long_inc_and_test();
+   atomic_dec_and_test();  atomic_long_dec_and_test();
+   atomic_sub_and_test();  atomic_long_sub_and_test();
+   atomic_add_negative();  atomic_long_add_negative();
test_and_set_bit();
test_and_clear_bit();
test_and_change_bit();
 
+   /* when succeeds (returns 1) */
+   atomic_add_unless();    atomic_long_add_unless();
+
 These are used for such things as implementing LOCK-class and UNLOCK-class
 operations and adjusting reference counters towards object destruction, and as
 such the implicit memory barrier effects are necessary.
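
As a quick illustration (hypothetical snippet, not from the patch; nr_users and put_resource() are made-up names), the long variants behave exactly like their atomic_t counterparts with respect to barriers:

        static atomic_long_t nr_users = ATOMIC_LONG_INIT(0);

        atomic_long_inc(&nr_users);                     /* no implicit barrier */
        ...
        if (atomic_long_dec_and_test(&nr_users))        /* implies a full barrier */
                put_resource();                         /* made-up helper */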


[tip:core/locking] rcu: Apply smp_mb__after_unlock_lock() to preserve grace periods

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  6303b9c87d52eaedc82968d3ff59c471e7682afc
Gitweb: http://git.kernel.org/tip/6303b9c87d52eaedc82968d3ff59c471e7682afc
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:10 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:16 +0100

rcu: Apply smp_mb__after_unlock_lock() to preserve grace periods

RCU must ensure that there is the equivalent of a full memory
barrier between any memory access preceding a given grace period and any
memory access following that same grace period, regardless of
which CPU(s) happen to execute the two memory accesses.
Therefore, downgrading UNLOCK+LOCK to no longer imply a full
memory barrier requires some adjustments to RCU.

This commit therefore adds smp_mb__after_unlock_lock()
invocations as needed after the RCU lock acquisitions that need
to be part of a full-memory-barrier UNLOCK+LOCK.
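
The change has the same shape at every affected site (illustrative sketch, not a literal hunk from the patch):

        raw_spin_lock_irq(&rnp->lock);
        smp_mb__after_unlock_lock();    /* pair with prior releases of ->lock */
        /* grace-period state updates that must observe earlier critical sections */
        raw_spin_unlock_irq(&rnp->lock);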

Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-7-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 kernel/rcu/tree.c| 18 +-
 kernel/rcu/tree_plugin.h | 13 +
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index dd08198..a6205a0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1133,8 +1133,10 @@ rcu_start_future_gp(struct rcu_node *rnp, struct 
rcu_data *rdp)
 * hold it, acquire the root rcu_node structure's lock in order to
 * start one (if needed).
 */
-   if (rnp != rnp_root)
+   if (rnp != rnp_root) {
raw_spin_lock(&rnp_root->lock);
+   smp_mb__after_unlock_lock();
+   }
 
/*
 * Get a new grace-period number.  If there really is no grace
@@ -1354,6 +1356,7 @@ static void note_gp_changes(struct rcu_state *rsp, struct 
rcu_data *rdp)
local_irq_restore(flags);
return;
}
+   smp_mb__after_unlock_lock();
__note_gp_changes(rsp, rnp, rdp);
raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
@@ -1368,6 +1371,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 
rcu_bind_gp_kthread();
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
if (rsp->gp_flags == 0) {
/* Spurious wakeup, tell caller to go back to sleep.  */
raw_spin_unlock_irq(&rnp->lock);
@@ -1409,6 +1413,7 @@ static int rcu_gp_init(struct rcu_state *rsp)
 */
rcu_for_each_node_breadth_first(rsp, rnp) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rdp = this_cpu_ptr(rsp->rda);
rcu_preempt_check_blocked_tasks(rnp);
rnp->qsmask = rnp->qsmaskinit;
@@ -1463,6 +1468,7 @@ static int rcu_gp_fqs(struct rcu_state *rsp, int 
fqs_state_in)
/* Clear flag to prevent immediate re-entry. */
if (ACCESS_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rsp->gp_flags &= ~RCU_GP_FLAG_FQS;
raw_spin_unlock_irq(&rnp->lock);
}
@@ -1480,6 +1486,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
struct rcu_node *rnp = rcu_get_root(rsp);
 
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
gp_duration = jiffies - rsp->gp_start;
if (gp_duration > rsp->gp_max)
rsp->gp_max = gp_duration;
@@ -1505,6 +1512,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 */
rcu_for_each_node_breadth_first(rsp, rnp) {
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
ACCESS_ONCE(rnp->completed) = rsp->gpnum;
rdp = this_cpu_ptr(rsp->rda);
if (rnp == rdp->mynode)
@@ -1515,6 +1523,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
}
rnp = rcu_get_root(rsp);
raw_spin_lock_irq(&rnp->lock);
+   smp_mb__after_unlock_lock();
rcu_nocb_gp_set(rnp, nocb);

rsp->completed = rsp->gpnum; /* Declare grace period done. */
@@ -1749,6 +1758,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state 
*rsp,
rnp_c = rnp;
rnp = rnp->parent;
raw_spin_lock_irqsave(&rnp->lock, flags);
+   smp_mb__after_unlock_lock();
WARN_ON_ONCE(rnp_c->qsmask);
}
 
@@ -1778,6 +1788,7 @@ rcu_report_qs_rdp(int cpu, struct rcu_state *rsp, struct 
rcu_data *rdp)
 
rnp = rdp->mynode;
raw_spin_lock_irqsave(&rnp->lock, flags);
+   smp_mb__after_unlock_lock();
if (rdp->passed_quiesce == 0 || rdp->gpnum != rnp->gpnum ||
rnp->completed == 

[tip:core/locking] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  17eb88e068430014deb709e5af34197cdf2390c9
Gitweb: http://git.kernel.org/tip/17eb88e068430014deb709e5af34197cdf2390c9
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:09 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:15 +0100

Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

Historically, an UNLOCK+LOCK pair executed by one CPU, by one
task, or on a given lock variable has implied a full memory
barrier.  In a recent LKML thread, the wisdom of this historical
approach was called into question:
http://www.spinics.net/lists/linux-mm/msg65653.html, in part due
to the memory-order complexities of low-handoff-overhead queued
locks on x86 systems.

This patch therefore removes this guarantee from the
documentation, and further documents how to restore it via a new
smp_mb__after_unlock_lock() primitive.
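
In the notation used by memory-barriers.txt, restoring the guarantee looks like this (illustrative sketch):

        *A = a;
        UNLOCK M
        LOCK Q
        smp_mb__after_unlock_lock();
        *B = b;

which ensures that other CPUs observe the ordering:

        STORE *A, UNLOCK M, LOCK Q, STORE *B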

Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-6-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 Documentation/memory-barriers.txt | 84 ---
 1 file changed, 69 insertions(+), 15 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 919fd60..cb753c8 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -402,12 +402,18 @@ And a couple of implicit varieties:
  Memory operations that occur after an UNLOCK operation may appear to
  happen before it completes.
 
- LOCK and UNLOCK operations are guaranteed to appear with respect to each
- other strictly in the order specified.
-
  The use of LOCK and UNLOCK operations generally precludes the need for
  other sorts of memory barrier (but note the exceptions mentioned in the
- subsection MMIO write barrier).
+ subsection MMIO write barrier).  In addition, an UNLOCK+LOCK pair
+ is -not- guaranteed to act as a full memory barrier.  However,
+ after a LOCK on a given lock variable, all memory accesses preceding any
+ prior UNLOCK on that same variable are guaranteed to be visible.
+ In other words, within a given lock variable's critical section,
+ all accesses of all previous critical sections for that lock variable
+ are guaranteed to have completed.
+
+ This means that LOCK acts as a minimal acquire operation and
+ UNLOCK acts as a minimal release operation.
 
 
 Memory barriers are only required where there's a possibility of interaction
@@ -1633,8 +1639,12 @@ for each construct.  These operations all imply certain 
barriers:
  Memory operations issued after the LOCK will be completed after the LOCK
  operation has completed.
 
- Memory operations issued before the LOCK may be completed after the LOCK
- operation has completed.
+ Memory operations issued before the LOCK may be completed after the
+ LOCK operation has completed.  An smp_mb__before_spinlock(), combined
+ with a following LOCK, orders prior loads against subsequent stores
+ and stores and prior stores against subsequent stores.  Note that
+ this is weaker than smp_mb()!  The smp_mb__before_spinlock()
+ primitive is free on many architectures.
 
  (2) UNLOCK operation implication:
 
@@ -1654,9 +1664,6 @@ for each construct.  These operations all imply certain 
barriers:
  All LOCK operations issued before an UNLOCK operation will be completed
  before the UNLOCK operation.
 
- All UNLOCK operations issued before a LOCK operation will be completed
- before the LOCK operation.
-
  (5) Failed conditional LOCK implication:
 
  Certain variants of the LOCK operation may fail, either due to being
@@ -1664,9 +1671,6 @@ for each construct.  These operations all imply certain 
barriers:
  signal whilst asleep waiting for the lock to become available.  Failed
  locks do not imply any sort of barrier.
 
-Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
-equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
-
 [!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
 barriers is that the effects of instructions outside of a critical section
 may seep into the inside of the critical section.
@@ -1677,13 +1681,57 @@ LOCK, and an access following the UNLOCK to happen 
before the UNLOCK, and the
 two accesses can themselves then cross:
 
*A = a;
-   LOCK
-   UNLOCK
+   LOCK M
+   UNLOCK M
*B = b;
 
 may occur as:
 
-   LOCK, STORE *B, STORE *A, UNLOCK
+   LOCK M, STORE *B, STORE *A, UNLOCK M
+
+This same reordering can of course occur if the LOCK and UNLOCK are
+to the same lock variable, but 

[tip:core/locking] powerpc: Full barrier for smp_mb__after_unlock_lock()

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  919fc6e34831d1c2b58bfb5ae261dc3facc9b269
Gitweb: http://git.kernel.org/tip/919fc6e34831d1c2b58bfb5ae261dc3facc9b269
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:11 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:18 +0100

powerpc: Full barrier for smp_mb__after_unlock_lock()

The powerpc lock acquisition sequence is as follows:

lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

lwsync; stw;

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1
does a lock acquisition then a load (say, r1=y), then there is
no guarantee of a full memory barrier between the store to 'x'
and the load from 'y'. To see this, suppose that CPUs 0 and 1
are hardware threads in the same core that share a store buffer,
and that CPU 2 is in some other core, and that CPU 2 does the
following:

y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock
acquisition and release sequences above can result in r1 and r2
both being equal to zero, which could not happen if unlock+lock
was a full barrier.
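
The scenario as a litmus-style sketch (CPU 0 initially holds lock L; x and y are both zero initially):

        CPU 0                   CPU 1                   CPU 2
        =====                   =====                   =====
        x = 1;                  spin_lock(&L);          y = 1;
        spin_unlock(&L);        r1 = y;                 sync;
                                                        r2 = x;

        The outcome r1 == 0 && r2 == 0 is observable with the lwsync-based
        sequences above, but would be forbidden if UNLOCK+LOCK were a full
        barrier.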

This commit therefore makes powerpc's
smp_mb__after_unlock_lock() be a full barrier.

Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Acked-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: Paul Mackerras pau...@samba.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 arch/powerpc/include/asm/spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h 
b/arch/powerpc/include/asm/spinlock.h
index 5f54a74..f6e78d6 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
#include <asm/synch.h>
#include <asm/ppc-opcode.h>

+#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
+
#define arch_spin_is_locked(x) ((x)->slock != 0)
 
 #ifdef CONFIG_PPC64


[tip:core/locking] Documentation/memory-barriers.txt: Document ACCESS_ONCE()

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  692118dac47e65f5131686b1103ebfebf0cbfa8e
Gitweb: http://git.kernel.org/tip/692118dac47e65f5131686b1103ebfebf0cbfa8e
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:07 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:12 +0100

Documentation/memory-barriers.txt: Document ACCESS_ONCE()

The situations in which ACCESS_ONCE() is required are not well
documented, so this commit adds some verbiage to
memory-barriers.txt.
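
A one-line flavour of the problem being documented (illustrative only; 'need_to_stop' is a made-up variable):

        /* Without ACCESS_ONCE(), the compiler may hoist the load of
           need_to_stop out of the loop and spin forever. */
        while (!ACCESS_ONCE(need_to_stop))
                cpu_relax();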

Reported-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Josh Triplett j...@joshtriplett.org
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-4-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 Documentation/memory-barriers.txt | 306 +-
 1 file changed, 271 insertions(+), 35 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index deafa36..919fd60 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -231,37 +231,8 @@ And there are a number of things that _must_ or _must_not_ 
be assumed:
  (*) It _must_not_ be assumed that the compiler will do what you want with
  memory references that are not protected by ACCESS_ONCE().  Without
  ACCESS_ONCE(), the compiler is within its rights to do all sorts
- of creative transformations:
-
- (-) Repeat the load, possibly getting a different value on the second
- and subsequent loads.  This is especially prone to happen when
-register pressure is high.
-
- (-) Merge adjacent loads and stores to the same location.  The most
- familiar example is the transformation from:
-
-   while (a)
-   do_something();
-
- to something like:
-
-   if (a)
-   for (;;)
-   do_something();
-
- Using ACCESS_ONCE() as follows prevents this sort of optimization:
-
-   while (ACCESS_ONCE(a))
-   do_something();
-
- (-) Store tearing, where a single store in the source code is split
- into smaller stores in the object code.  Note that gcc really
-will do this on some architectures when storing certain constants.
-It can be cheaper to do a series of immediate stores than to
-form the constant in a register and then to store that register.
-
- (-) Load tearing, which splits loads in a manner analogous to
-store tearing.
+ of creative transformations, which are covered in the Compiler
+ Barrier section.
 
  (*) It _must_not_ be assumed that independent loads and stores will be issued
  in the order given.  This means that for:
@@ -749,7 +720,8 @@ In summary:
 
   (*) Control dependencies require that the compiler avoid reordering the
   dependency into nonexistence.  Careful use of ACCESS_ONCE() or
-  barrier() can help to preserve your control dependency.
+  barrier() can help to preserve your control dependency.  Please
+  see the Compiler Barrier section for more information.
 
   (*) Control dependencies do -not- provide transitivity.  If you
   need transitivity, use smp_mb().
@@ -1248,12 +1220,276 @@ compiler from moving the memory accesses either side 
of it to the other side:
barrier();
 
 This is a general barrier -- there are no read-read or write-write variants
-of barrier().  Howevever, ACCESS_ONCE() can be thought of as a weak form
+of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
 for barrier() that affects only the specific accesses flagged by the
 ACCESS_ONCE().
 
-The compiler barrier has no direct effect on the CPU, which may then reorder
-things however it wishes.
+The barrier() function has the following effects:
+
+ (*) Prevents the compiler from reordering accesses following the
+ barrier() to precede any accesses preceding the barrier().
+ One example use for this property is to ease communication between
+ interrupt-handler code and the code that was interrupted.
+
+ (*) Within a loop, forces the compiler to load the variables used
+ in that loop's conditional on each pass through that loop.
+
+The ACCESS_ONCE() function can prevent any number of optimizations that,
+while perfectly safe in single-threaded code, can be fatal in concurrent
+code.  Here are some examples of these sorts of optimizations:
+
+ (*) The compiler is within its rights to merge successive loads from
+ the same variable.  Such merging can cause the compiler to optimize
+ the following code:
+
+   while (tmp = a)
+   do_something_with(tmp);
+
+ into the following code, 

[tip:core/locking] Documentation/memory-barriers.txt: Add needed ACCESS_ONCE() calls to memory-barriers.txt

2013-12-16 Thread tip-bot for Paul E. McKenney
Commit-ID:  2ecf810121c7eae34473b8fa108112036bc61127
Gitweb: http://git.kernel.org/tip/2ecf810121c7eae34473b8fa108112036bc61127
Author: Paul E. McKenney paul...@linux.vnet.ibm.com
AuthorDate: Wed, 11 Dec 2013 13:59:04 -0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Mon, 16 Dec 2013 11:36:08 +0100

Documentation/memory-barriers.txt: Add needed ACCESS_ONCE() calls to 
memory-barriers.txt

The Documentation/memory-barriers.txt file was written before
the need for ACCESS_ONCE() was fully appreciated.  It therefore
contains no ACCESS_ONCE() calls, which can be a problem when
people lift examples from it.  This commit therefore adds
ACCESS_ONCE() calls.

Signed-off-by: Paul E. McKenney paul...@linux.vnet.ibm.com
Reviewed-by: Josh Triplett j...@joshtriplett.org
Reviewed-by: Peter Zijlstra a.p.zijls...@chello.nl
Cc: linux-a...@vger.kernel.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Link: 
http://lkml.kernel.org/r/1386799151-2219-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 Documentation/memory-barriers.txt | 206 +++---
 1 file changed, 126 insertions(+), 80 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 020cccd..1d06723 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -194,18 +194,22 @@ There are some minimal guarantees that may be expected of 
a CPU:
  (*) On any given CPU, dependent memory accesses will be issued in order, with
  respect to itself.  This means that for:
 
-   Q = P; D = *Q;
+   ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
 
  the CPU will issue the following memory operations:
 
Q = LOAD P, D = LOAD *Q
 
- and always in that order.
+ and always in that order.  On most systems, smp_read_barrier_depends()
+ does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
+ is required to prevent compiler mischief.  Please note that you
+ should normally use something like rcu_dereference() instead of
+ open-coding smp_read_barrier_depends().
 
  (*) Overlapping loads and stores within a particular CPU will appear to be
  ordered within that CPU.  This means that for:
 
-   a = *X; *X = b;
+   a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
 
  the CPU will only issue the following sequence of memory operations:
 
@@ -213,7 +217,7 @@ There are some minimal guarantees that may be expected of a 
CPU:
 
  And for:
 
-   *X = c; d = *X;
+   ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
 
  the CPU will only issue:
 
@@ -224,6 +228,41 @@ There are some minimal guarantees that may be expected of 
a CPU:
 
 And there are a number of things that _must_ or _must_not_ be assumed:
 
+ (*) It _must_not_ be assumed that the compiler will do what you want with
+ memory references that are not protected by ACCESS_ONCE().  Without
+ ACCESS_ONCE(), the compiler is within its rights to do all sorts
+ of creative transformations:
+
+ (-) Repeat the load, possibly getting a different value on the second
+ and subsequent loads.  This is especially prone to happen when
+register pressure is high.
+
+ (-) Merge adjacent loads and stores to the same location.  The most
+ familiar example is the transformation from:
+
+   while (a)
+   do_something();
+
+ to something like:
+
+   if (a)
+   for (;;)
+   do_something();
+
+ Using ACCESS_ONCE() as follows prevents this sort of optimization:
+
+   while (ACCESS_ONCE(a))
+   do_something();
+
+ (-) Store tearing, where a single store in the source code is split
+ into smaller stores in the object code.  Note that gcc really
+will do this on some architectures when storing certain constants.
+It can be cheaper to do a series of immediate stores than to
+form the constant in a register and then to store that register.
+
+ (-) Load tearing, which splits loads in a manner analogous to
+store tearing.
+
  (*) It _must_not_ be assumed that independent loads and stores will be issued
  in the order given.  This means that for:
 
@@ -450,14 +489,14 @@ The usage requirements of data dependency barriers are a 
little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
 following sequence of events:
 
-   CPU 1   CPU 2
-   === ===
+   CPU 1 CPU 2
+   ===   ===
{ A == 1, B == 2, C = 3, P == &A, Q == &C }
B = 4;
write barrier
-   P = &B
-   Q = P;
-   D = *Q;
+   ACCESS_ONCE(P) = &B
+ Q = 

[tip:perf/urgent] events: Protect access via task_subsys_state_check()

2013-04-21 Thread tip-bot for Paul E. McKenney
Commit-ID:  c79aa0d96548aee50570209eb2d45c8f4ac49230
Gitweb: http://git.kernel.org/tip/c79aa0d96548aee50570209eb2d45c8f4ac49230
Author: Paul E. McKenney 
AuthorDate: Fri, 19 Apr 2013 12:01:24 -0700
Committer:  Ingo Molnar 
CommitDate: Sun, 21 Apr 2013 11:21:39 +0200

events: Protect access via task_subsys_state_check()

The following RCU splat indicates lack of RCU protection:

[  953.267649] ===
[  953.267652] [ INFO: suspicious RCU usage. ]
[  953.267657] 3.9.0-0.rc6.git2.4.fc19.ppc64p7 #1 Not tainted
[  953.267661] ---
[  953.267664] include/linux/cgroup.h:534 suspicious rcu_dereference_check() 
usage!
[  953.267669]
[  953.267669] other info that might help us debug this:
[  953.267669]
[  953.267675]
[  953.267675] rcu_scheduler_active = 1, debug_locks = 0
[  953.267680] 1 lock held by glxgears/1289:
[  953.267683]  #0:  (&sig->cred_guard_mutex){+.+.+.}, at: [c027f884] 
.prepare_bprm_creds+0x34/0xa0
[  953.267700]
[  953.267700] stack backtrace:
[  953.267704] Call Trace:
[  953.267709] [c001f0d1b6e0] [c0016e30] .show_stack+0x130/0x200 
(unreliable)
[  953.267717] [c001f0d1b7b0] [c01267f8] 
.lockdep_rcu_suspicious+0x138/0x180
[  953.267724] [c001f0d1b840] [c01d43a4] 
.perf_event_comm+0x4c4/0x690
[  953.267731] [c001f0d1b950] [c027f6e4] .set_task_comm+0x84/0x1f0
[  953.267737] [c001f0d1b9f0] [c0280414] .setup_new_exec+0x94/0x220
[  953.267744] [c001f0d1ba70] [c02f665c] 
.load_elf_binary+0x58c/0x19b0
...

This commit therefore adds the required RCU read-side critical
section to perf_event_comm().
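
Schematically, the fix brackets the per-context walk with an RCU read-side critical section (sketch of the resulting code shape):

        rcu_read_lock();
        for_each_task_context_nr(ctxn) {
                ctx = task->perf_event_ctxp[ctxn];
                if (!ctx)
                        continue;
                perf_event_enable_on_exec(ctx);
        }
        rcu_read_unlock();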

Reported-by: Adam Jackson 
Signed-off-by: Paul E. McKenney 
Cc: a.p.zijls...@chello.nl
Cc: pau...@samba.org
Cc: a...@ghostprotocols.net
Link: http://lkml.kernel.org/r/20130419190124.ga8...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
Tested-by: Gustavo Luiz Duarte 
---
 kernel/events/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4d3124b..9fcb094 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4596,6 +4596,7 @@ void perf_event_comm(struct task_struct *task)
struct perf_event_context *ctx;
int ctxn;
 
+   rcu_read_lock();
for_each_task_context_nr(ctxn) {
ctx = task->perf_event_ctxp[ctxn];
if (!ctx)
@@ -4603,6 +4604,7 @@ void perf_event_comm(struct task_struct *task)
 
perf_event_enable_on_exec(ctx);
}
+   rcu_read_unlock();
 
if (!atomic_read(&nr_comm_events))
return;

