Re: [PATCH] Selinux/hooks.c: Fix a NULL pointer dereference caused by semop()

2015-01-20 Thread Manfred Spraul
On 01/21/2015 04:53 AM, Ethan Zhao wrote: On Tue, Jan 20, 2015 at 10:10 PM, Stephen Smalley s...@tycho.nsa.gov wrote: On 01/20/2015 04:18 AM, Ethan Zhao wrote: sys_semget() -> newary() -> security_sem_alloc() -> sem_alloc_security()

Re: [PATCH] Selinux/hooks.c: Fix a NULL pointer dereference caused by semop()

2015-01-22 Thread Manfred Spraul
On 01/22/2015 03:44 AM, Ethan Zhao wrote: On Wed, Jan 21, 2015 at 1:30 PM, Manfred Spraul manf...@colorfullife.com wrote: On 01/21/2015 04:53 AM, Ethan Zhao wrote: On Tue, Jan 20, 2015 at 10:10 PM, Stephen Smalley s...@tycho.nsa.gov wrote: On 01/20/2015 04:18 AM, Ethan Zhao wrote

Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles

2015-02-18 Thread Manfred Spraul
Hi Oleg, On 02/18/2015 04:59 PM, Oleg Nesterov wrote: Let's look at sem_lock(). I never looked at this code before, I can be easily wrong. Manfred will correct me. But at first glance we can write the oversimplified pseudo-code: spinlock_t local, global; bool my_lock(bool

[PATCH] ipc/sem.c: Update/correct memory barriers.

2015-02-28 Thread Manfred Spraul
care of adding it to a tree that is heading for Linus' tree? Signed-off-by: Manfred Spraul manf...@colorfullife.com Reported-by: Oleg Nesterov o...@redhat.com Cc: sta...@vger.kernel.org --- include/linux/spinlock.h | 10 ++ ipc/sem.c| 7 ++- 2 files changed, 16

Re: [PATCH] ipc/sem.c: Update/correct memory barriers.

2015-03-01 Thread Manfred Spraul
Hi Oleg, On 03/01/2015 02:22 PM, Oleg Nesterov wrote: On 02/28, Peter Zijlstra wrote: On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote: +/* + * Place this after a control barrier (such as e.g. a spin_unlock_wait()) + * to ensure that reads cannot be moved ahead

[PATCH] ipc/sem.c: Update/correct memory barriers

2015-03-01 Thread Manfred Spraul
.: starting from 3.10). Signed-off-by: Manfred Spraul manf...@colorfullife.com Reported-by: Oleg Nesterov o...@redhat.com Cc: sta...@vger.kernel.org --- include/linux/spinlock.h | 15 +++ ipc/sem.c| 8 2 files changed, 19 insertions(+), 4 deletions(-) diff

[RFC PATCH] ipc/sem.c: Add one more memory barrier to sem_lock().

2015-02-25 Thread Manfred Spraul
. But since the existing control boundary is a write memory barrier, it is cheaper use an smp_rmb(). Signed-off-by: Manfred Spraul manf...@colorfullife.com --- ipc/sem.c | 26 +- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/ipc/sem.c b/ipc/sem.c index 9284211

Re: [RFC PATCH] ipc/sem.c: Add one more memory barrier to sem_lock().

2015-02-26 Thread Manfred Spraul
Hi Oleg, On 02/26/2015 08:29 PM, Oleg Nesterov wrote: @@ -341,7 +359,13 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops, * Thus: if it is now 0, then it will stay 0. */ if (sma->complex_count == 0) {

Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles

2015-02-20 Thread Manfred Spraul
Hi Oleg, my example was bad, let's continue with your example. And: If sem_lock() needs another smp_xmb(), then we must add it: Some apps do not have a user space hot path, i.e. it seems that on some setups, we have millions of calls per second. If there is a race, then it will happen. I've

Re: [PATCH v2] ipc/mqueue: remove STATE_PENDING

2015-04-29 Thread Manfred Spraul
Hi Davidlohr, On 04/28/2015 06:59 PM, Davidlohr Bueso wrote: On Tue, 2015-04-28 at 18:43 +0200, Peter Zijlstra wrote: Well, if you can 'guarantee' the cmpxchg will not fail, you can then rely on the fact that cmpxchg implies a full barrier, which would obviate the need for the wmb. Yes,

Re: [PATCH] spinlock: clarify doc for raw_spin_unlock_wait()

2015-04-29 Thread Manfred Spraul
-by: Chris Metcalf cmetc...@ezchip.com sysvsem depends on this definition, i.e. a false early return can cause a corrupted semaphore state. Acked-by: Manfred Spraul manf...@colorfullife.com --- On 04/28/2015 12:24 PM, Peter Zijlstra wrote: I think it must not return before the lock holder

Re: [PATCH 3/3] ipc/mqueue: remove STATE_PENDING

2015-04-07 Thread Manfred Spraul
On 04/07/2015 05:03 PM, Sebastian Andrzej Siewior wrote: This patch moves the wakeup_process() invocation so it is not done under the info->lock. With this change, the waiter is woken up once it is ready which means its state is STATE_READY and it does not need to loop on SMP if it is still in

Re: [PATCH 2/2] ipc,msg: provide barrier pairings for lockless receive

2015-06-04 Thread Manfred Spraul
On 05/30/2015 02:03 AM, Davidlohr Bueso wrote: We currently use a full barrier on the sender side to avoid receiver tasks disappearing on us while still performing on the sender side wakeup. We lack, however, the proper CPU-CPU interactions pairing on the receiver side which busy-waits for the

Re: [PATCH 1/2] ipc,shm: move BUG_ON check into shm_lock

2015-06-04 Thread Manfred Spraul
Hi Davidlohr, On 05/30/2015 02:03 AM, Davidlohr Bueso wrote: Upon every shm_lock call, we BUG_ON if an error was returned, indicating racing either in idr or in RMID. Move this logic into the locking. Signed-off-by: Davidlohr Bueso dbu...@suse.de --- ipc/shm.c | 11 +++ 1 file

Re: [PATCH 1/2 v2] ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits

2015-08-11 Thread Manfred Spraul
, CONFIG_SLAB_DEBUG and CONFIG_DEBUG_SPINLOCK, you can easily see something like the following in the kernel log: Signed-off-by: Herton R. Krzesinski her...@redhat.com Cc: sta...@vger.kernel.org Acked-by: Manfred Spraul manf...@colorfullife.com -- Manfred -- To unsubscribe from this list: send

Re: [PATCH 2/2] ipc,sem: remove uneeded sem_undo_list lock usage in exit_sem()

2015-08-11 Thread Manfred Spraul
-by: Herton R. Krzesinski her...@redhat.com Acked-by: Manfred Spraul manf...@colorfullife.com -- Manfred

[PATCH] ipc/sem.c: Update/correct memory barriers

2015-08-09 Thread Manfred Spraul
sem_wait_array() (i.e.: starting from 3.10). Andrew: Could you include it into your tree and forward it? Signed-off-by: Manfred Spraul manf...@colorfullife.com Reported-by: Oleg Nesterov o...@redhat.com Cc: sta...@vger.kernel.org --- ipc/sem.c | 18 ++ 1 file changed, 14 insertions

Re: [PATCH] ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits

2015-08-09 Thread Manfred Spraul
Hi Herton, On 08/07/2015 07:09 PM, Herton R. Krzesinski wrote: The current semaphore code allows a potential use after free: in exit_sem we may free the task's sem_undo_list while there is still another task looping through the same semaphore set and cleaning the sem_undo list at freeary

Re: [PATCH] ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits

2015-08-10 Thread Manfred Spraul
Hi Herton, On 08/10/2015 05:31 PM, Herton R. Krzesinski wrote: Well without the synchronize_rcu() and with the semid list loop fix I was still able to get issues, and I thought the problem is related to racing with IPC_RMID on freeary again. This is one scenario I would imagine:

Re: [PATCH] ipc/msg: Implement lockless pipelined wakeups

2015-10-31 Thread Manfred Spraul
-- Manfred /* * pmsg.cpp, parallel sysv msg pingpong * * Copyright (C) 1999, 2001, 2005, 2008 by Manfred Spraul. * All rights reserved except the rights granted by the GPL. * * Redistribution of this file is permitted under the terms of the GNU * General Public License (GPL) version 2 or l

Re: PROBLEM: Concurrency issue in sem_lock

2015-10-10 Thread Manfred Spraul
0:00:00 2001 From: Manfred Spraul <manf...@colorfullife.com> Date: Sat, 10 Oct 2015 08:37:22 +0200 Subject: [PATCH] ipc/sem.c: Alternative for fixing Concurrency bug Two ideas for fixing the bug found by Felix: - Revert my initial patch. Problem: Significant slowdown for application that use

[PATCH] ipc/sem.c: Fix complex_count vs. simple op race

2016-01-02 Thread Manfred Spraul
plex_count==1) - wakes up Thread B. - decrements complex_count Thread A: - does the complex_count test Bug: Now both thread A and thread C operate on the same array, without any synchronization. Reported-by: fel...@informatik.uni-bremen.de Signed-off-by: Manfred Spraul <manf...@colorfullife.c

Re: GPF in shm_lock ipc

2016-01-02 Thread Manfred Spraul
Hi Dmitry, shm locking differs too much from msg/sem locking, I never looked at it in depth, so I'm not able to perform a proper review. Except for the obvious: races that can be triggered from user space are unacceptable, regardless of whether there is a BUG_ON, a WARN_ON or nothing at all. On

Re: [PATCH, RESEND] ipc/shm: handle removed segments gracefully in shm_mmap()

2016-01-02 Thread Manfred Spraul
On 11/13/2015 08:23 PM, Davidlohr Bueso wrote: So considering EINVAL, even your approach to bumping up nattach by calling _shm_open earlier isn't enough. Races exposed to user called rmid can still occur between dropping the lock and doing ->mmap(). Ultimately this leads to all

Re: GPF in shm_lock ipc

2016-01-02 Thread Manfred Spraul
Hi Dmitry, On 01/02/2016 01:19 PM, Dmitry Vyukov wrote: On Sat, Jan 2, 2016 at 12:33 PM, Manfred Spraul <manf...@colorfullife.com> wrote: Hi Dmitry, shm locking differs too much from msg/sem locking, I never looked at it in depth, so I'm not able to perform a proper review.

Re: [PATCH] ipc/sem.c: Fix complex_count vs. simple op race

2016-01-04 Thread Manfred Spraul
On 01/04/2016 02:02 PM, Davidlohr Bueso wrote: On Sat, 02 Jan 2016, Manfred Spraul wrote: Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a race: sem_lock has a fast path that allows parallel simple operations. There are two reasons why a simple operation

Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-06-23 Thread Manfred Spraul
On 06/21/2016 01:04 AM, Andrew Morton wrote: On Sat, 18 Jun 2016 22:02:21 +0200 Manfred Spraul <manf...@colorfullife.com> wrote: Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a race: sem_lock has a fast path that allows parallel simple operations. There

Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-06-23 Thread Manfred Spraul
On 06/21/2016 02:30 AM, Davidlohr Bueso wrote: On Sat, 18 Jun 2016, Manfred Spraul wrote: diff --git a/include/linux/sem.h b/include/linux/sem.h index 976ce3a..d0efd6e 100644 --- a/include/linux/sem.h +++ b/include/linux/sem.h @@ -21,6 +21,7 @@ struct sem_array { struct list_head

Re: linux-next: manual merge of the akpm-current tree with the tip tree

2016-06-18 Thread Manfred Spraul
Hi, On 06/15/2016 07:23 AM, Stephen Rothwell wrote: Hi Andrew, Today's linux-next merge of the akpm-current tree got a conflict in: ipc/sem.c between commit: 33ac279677dc ("locking/barriers: Introduce smp_acquire__after_ctrl_dep()") from the tip tree and commit: a1c58ea067cb

[PATCH 2/2] ipc/sem: sem_lock with hysteresis

2016-06-18 Thread Manfred Spraul
lock to the per semaphore locks. This reduces how often the per-semaphore locks must be scanned. Passed stress testing with sem-scalebench. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- include/linux/sem.h | 2 +- ipc/sem.c

[PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-06-18 Thread Manfred Spraul
plex_count==1) - wakes up Thread B. - decrements complex_count Thread A: - does the complex_count test Bug: Now both thread A and thread C operate on the same array, without any synchronization. Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") Reported-by: fel...@in

[PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-06-25 Thread Manfred Spraul
xes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") Reported-by: fel...@informatik.uni-bremen.de Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: <sta...@vger.kernel.org> --- include/linux/sem.h | 1 + ipc/sem.c | 122 ++-

[PATCH 2/2] ipc/sem: sem_lock with hysteresis

2016-06-25 Thread Manfred Spraul
lock to the per semaphore locks. This reduces how often the per-semaphore locks must be scanned. Passed stress testing with sem-scalebench. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- include/linux/sem.h | 2 +- ipc/sem.c

[PATCH 0/2] ipc/sem.c: sem_lock fixes

2016-06-25 Thread Manfred Spraul
Hi Andrew, Hi Peter, next version of the sem_lock() fixes / improvement: The patches are now vs. tip. Patch 1 is ready for merging, patch 2 is new and for discussion. Patch 1 fixes the race that was found by Felix. It also adds smp_mb() to fully synchronize WRITE_ONCE(status, 1);

Re: [PATCH 2/2] ipc/sem: sem_lock with hysteresis

2016-06-25 Thread Manfred Spraul
On 06/21/2016 10:29 PM, Davidlohr Bueso wrote: On Sat, 18 Jun 2016, Manfred Spraul wrote: sysv sem has two lock modes: One with per-semaphore locks, one lock mode with a single big lock for the whole array. When switching from the per-semaphore locks to the big lock, all per-semaphore locks

Re: [PATCH] Don't set sempid in semctl syscall.

2016-02-26 Thread Manfred Spraul
Hi, On 02/26/2016 01:21 PM, PrasannaKumar Muralidharan wrote: From: PrasannaKumar Muralidharan As described in bug #112271 (bugzilla.kernel.org/show_bug.cgi?id=112271) don't set sempid in semctl syscall. Set sempid only when semop is called. I disagree with the

Re: [lkp] [ipc/msg] 0050ee059f: otc_kernel_qa-ts_ltp_ddt.LTP_syscalls.msgctl11.fail

2016-02-17 Thread Manfred Spraul
Hi Ying, On 02/14/2016 07:41 AM, kernel test robot wrote: FYI, we noticed the below changes on https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master commit 0050ee059f7fc86b1df2527aaa14ed5dc72f9973 ("ipc/msg: increase MSGMNI, remove scaling") LTP_syscalls: msgctl11: "Not

Re: sem_lock() vs qspinlocks

2016-05-21 Thread Manfred Spraul
On 05/21/2016 09:37 AM, Peter Zijlstra wrote: On Fri, May 20, 2016 at 05:48:39PM -0700, Davidlohr Bueso wrote: As opposed to spin_is_locked(), spin_unlock_wait() is perhaps more tempting to use for locking correctness. For example, taking a look at nf_conntrack_all_lock(), it too likes to get

Re: sem_lock() vs qspinlocks

2016-05-22 Thread Manfred Spraul
Hi Peter, On 05/20/2016 06:04 PM, Peter Zijlstra wrote: On Fri, May 20, 2016 at 05:21:49PM +0200, Peter Zijlstra wrote: Let me write a patch.. OK, something like the below then.. lemme go build that and verify that too fixes things. --- Subject: locking,qspinlock: Fix spin_is_locked() and

Re: [PATCH 0/2] ipc/sem.c: sem_lock fixes

2016-07-14 Thread Manfred Spraul
Hi Andrew, On 07/14/2016 12:05 AM, Andrew Morton wrote: On Wed, 13 Jul 2016 07:06:50 +0200 Manfred Spraul <manf...@colorfullife.com> wrote: Hi Andrew, Hi Peter, next version of the sem_lock() fixes: The patches are again vs. tip. Patch 1 is ready for merging, Patch 2 is for

[PATCH] ipc/sem.c: Fix complex_count vs. simple op race

2016-07-21 Thread Manfred Spraul
e16a ("ipc/sem.c: optimize sem_lock()") Reported-by: fel...@informatik.uni-bremen.de Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: <sta...@vger.kernel.org> --- include/linux/sem.h | 1 + ipc/sem.c | 138 +++---

Re: [PATCH 1/1 linux-next] ipc/msg.c: fix memory leak in do_msgsnd()

2016-07-31 Thread Manfred Spraul
Hi Fabian, On 07/29/2016 10:15 AM, Fabian Frederick wrote: Running LTP msgsnd06 with kmemleak gives the following: cat /sys/kernel/debug/kmemleak unreferenced object 0x88003c0a11f8 (size 8): comm "msgsnd06", pid 1645, jiffies 4294672526 (age 6.549s) hex dump (first 8 bytes): 1b

Re: spin_lock implicit/explicit memory barrier

2016-08-10 Thread Manfred Spraul
Hi, [adding Peter, correcting Davidlohr's mail address] On 08/10/2016 02:05 AM, Benjamin Herrenschmidt wrote: On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote: Hi Benjamin, Hi Michael, regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()"): For t

Re: spin_lock implicit/explicit memory barrier

2016-08-12 Thread Manfred Spraul
Hi Boqun, On 08/12/2016 04:47 AM, Boqun Feng wrote: We should not be doing an smp_mb() right after a spin_lock(), makes no sense. The spinlock machinery should guarantee us the barriers in the unorthodox locking cases, such as this. Do we really want to go there? Trying to handle all

Re: [PATCH 2/2] ipc/sem.c: Remove duplicated memory barriers.

2016-07-13 Thread Manfred Spraul
Hi Davidlohr, On 07/13/2016 06:16 PM, Davidlohr Bueso wrote: Manfred, shouldn't this patch be part of patch 1 (as you add the unnecessary barriers there? Iow, can we have a single patch for all this? Two reasons: - patch 1 is safe for backporting, patch 2 not. - patch 1 is safe on all

[PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-07-12 Thread Manfred Spraul
w both thread A and thread C operate on the same array, without any synchronization. Full memory barrier are required to synchronize changes of complex_mode and the lock operations. Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") Reported-by: fel...@informatik.uni-bremen.de Signed-

[PATCH 2/2] ipc/sem.c: Remove duplicated memory barriers.

2016-07-12 Thread Manfred Spraul
SMP. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- ipc/sem.c | 14 -- 1 file changed, 14 deletions(-) diff --git a/ipc/sem.c b/ipc/sem.c index 0da63c8..d7b4212 100644 --- a/ipc/sem.c +++ b/ipc/sem.c @@ -291,14 +291,6 @@ static void complexmode_enter(struct sem

[PATCH 0/2] ipc/sem.c: sem_lock fixes

2016-07-12 Thread Manfred Spraul
Hi Andrew, Hi Peter, next version of the sem_lock() fixes: The patches are again vs. tip. Patch 1 is ready for merging, Patch 2 is for review. - Patch 1 is the patch as in -next since January It fixes the race that was found by Felix. - Patch 2 removes the memory barriers that are part of the

spin_lock implicit/explicit memory barrier

2016-08-09 Thread Manfred Spraul
Hi Benjamin, Hi Michael, regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()"): For the ipc/sem code, I would like to replace the spin_is_locked() with a smp_load_acquire(), see: http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367

Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

2016-06-30 Thread Manfred Spraul
On 06/28/2016 07:27 AM, Davidlohr Bueso wrote: On Thu, 23 Jun 2016, Manfred Spraul wrote: What I'm not sure yet is if smp_load_acquire() is sufficient: Thread A: if (!READ_ONCE(sma->complex_mode)) { The code is test_and_test, no barrier requirements for first test Yeah, it wo

Re: spin_lock implicit/explicit memory barrier

2016-08-15 Thread Manfred Spraul
Hi Paul, On 08/10/2016 11:00 PM, Paul E. McKenney wrote: On Wed, Aug 10, 2016 at 12:17:57PM -0700, Davidlohr Bueso wrote: [...]
CPU0                     CPU1
complex_mode = true      spin_lock(l)
smp_mb()                 <--- do we want a smp_mb() here?

Re: [PATCH 8/7] net/netfilter/nf_conntrack_core: Remove another memory barrier

2016-09-02 Thread Manfred Spraul
On 09/02/2016 09:22 PM, Peter Zijlstra wrote: On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote: On 09/01/2016 06:41 PM, Peter Zijlstra wrote: On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote: On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote: Since

Re: [PATCH 8/7] net/netfilter/nf_conntrack_core: Remove another memory barrier

2016-09-05 Thread Manfred Spraul
Hi Peter, On 09/02/2016 09:22 PM, Peter Zijlstra wrote: On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote: On 09/01/2016 06:41 PM, Peter Zijlstra wrote: On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote: On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote

Re: [lkp] [ipc/sem.c] 99ac0dfffc: aim9.shared_memory.ops_per_sec -8.9% regression

2016-09-06 Thread Manfred Spraul
Hi, On 09/06/2016 08:42 AM, kernel test robot wrote: FYI, we noticed a -8.9% regression of aim9.shared_memory.ops_per_sec due to commit: commit 99ac0dfffcfb34326a880e90e06c30a2a882c692 ("ipc/sem.c: fix complex_count vs. simple op race")

Re: [PATCH 1/4] spinlock: Document memory barrier rules

2016-09-01 Thread Manfred Spraul
Hi, On 09/01/2016 10:44 AM, Peter Zijlstra wrote: On Wed, Aug 31, 2016 at 08:32:18PM +0200, Manfred Spraul wrote: On 08/31/2016 06:40 PM, Will Deacon wrote: The litmus test then looks a bit like: CPUm: LOCK(x) smp_mb(); RyAcq=0 CPUn: Wy=1 smp_mb(); UNLOCK_WAIT(x) Correct. which I think

Re: [PATCH 8/7] net/netfilter/nf_conntrack_core: Remove another memory barrier

2016-09-02 Thread Manfred Spraul
On 09/01/2016 06:41 PM, Peter Zijlstra wrote: On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote: On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote: Since spin_unlock_wait() is defined as equivalent to spin_lock(); spin_unlock(), the memory barrier before spin_unlock_wait

[PATCH 0/7 V6] Clarify/standardize memory barriers for lock/unlock

2016-09-01 Thread Manfred Spraul
Hi, Based on the new consensus: - spin_unlock_wait() is spin_lock();spin_unlock(); - no guarantees are provided by spin_is_locked(). - the acquire during spin_lock() is for the load, not for the store. Summary: If a high-scalability locking scheme is built with multiple spinlocks, then often

[PATCH 1/7] ipc/sem.c: Remove smp_rmb() from complexmode_enter()

2016-09-01 Thread Manfred Spraul
he smp_rmb() after spin_unlock_wait() can be removed. Not for stable! Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- ipc/sem.c | 8 1 file changed, 8 deletions(-) diff --git a/ipc/sem.c b/ipc/sem.c index 5e318c5..6586e0a 100644 --- a/ipc/sem.c +++ b/ipc/sem.c @@ -290,14

[PATCH 3/7] ipc/sem.c: Rely on spin_unlock_wait() = spin_lock();spin_unlock().

2016-09-01 Thread Manfred Spraul
From memory ordering point of view, spin_unlock_wait() provides the same guarantees as spin_lock(); spin_unlock(). Therefore the smp_mb() after spin_lock() is not necessary, spin_unlock_wait() must provide the memory ordering. Signed-off-by: Manfred Spraul <manf...@colorfullife.c

[PATCH 6/7] net/netfilter/nf_conntrack_core: Remove barriers after spin_unlock_wait

2016-09-01 Thread Manfred Spraul
mb() after spin_unlock_wait() can be removed. Not for stable! Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: Pablo Neira Ayuso <pa...@netfilter.org> Cc: netfilter-de...@vger.kernel.org --- net/netfilter/nf_conntrack_core.c | 5 - 1 file changed, 5 deletions(-) diff

[PATCH 8/7] net/netfilter/nf_conntrack_core: Remove another memory barrier

2016-09-01 Thread Manfred Spraul
Since spin_unlock_wait() is defined as equivalent to spin_lock(); spin_unlock(), the memory barrier before spin_unlock_wait() is also not required. Not for stable! Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: Pablo Neira Ayuso <pa...@netfilter.org> Cc:

[PATCH 5/7] net/netfilter/nf_conntrack_core: Fix memory barriers.

2016-09-01 Thread Manfred Spraul
n it must be checked first if all updates to qspinlock were backported. Fixes: b16c29191dc8 Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: <sta...@vger.kernel.org> Cc: Sasha Levin <sasha.le...@oracle.com> Cc: Pablo Neira Ayuso <pa...@netfilter.org> Cc: netfil

[PATCH 7/7] net/netfilter/nf_conntrack_core: Remove smp_mb() after spin_lock().

2016-09-01 Thread Manfred Spraul
As spin_unlock_wait() is defined as equivalent to spin_lock(); spin_unlock(), the smp_mb() after spin_lock() is not required. Remove it. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- net/netfilter/nf_conntrack_core.c | 5 + 1 file changed, 1 insertion(+), 4 deletions(-)

[PATCH 2/7] spinlock: Document memory barrier rules for spin_lock and spin_unlock().

2016-09-01 Thread Manfred Spraul
? - spin_unlock_wait() is spin_lock()+spin_unlock(). - No memory ordering is enforced by spin_is_locked(). The patch adds this into Documentation/locking/spinlock.txt. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: Will Deacon <will.dea...@arm.com> --- Documentation/locking/spinl

[PATCH 4/7] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h

2016-09-01 Thread Manfred Spraul
a full memory barrier: (everything initialized to 0) CPU1: a=1; spin_unlock(); spin_lock(); + smp_mb__after_unlock_lock(); r1=d; CPU2: d=1; smp_mb(); r2=a; Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would be possible. Signed-off-by: Manfred Spra

[PATCH 9/7] ipc/sem.c: Remove another memory barrier.

2016-09-01 Thread Manfred Spraul
As spin_unlock_wait() is defined as equivalent to spin_lock(); spin_unlock(), the memory barrier before spin_unlock_wait() is not required. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- ipc/sem.c | 6 +- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/ipc/s

Re: [PATCH 1/4] spinlock: Document memory barrier rules

2016-08-31 Thread Manfred Spraul
On 08/31/2016 06:40 PM, Will Deacon wrote: I'm struggling with this example. We have these locks: sma->sem_base[0...sma->sem_nsems].lock and sma->sem_perm.lock; a condition variable: sma->complex_mode; and a new barrier: smp_mb__after_spin_lock() For simplicity, we can make

Re: [PATCH 1/4] spinlock: Document memory barrier rules

2016-08-30 Thread Manfred Spraul
On 08/29/2016 03:44 PM, Peter Zijlstra wrote: If you add a barrier, the Changelog had better be clear. And I'm still not entirely sure I get what exactly this barrier should do, nor why it defaults to a full smp_mb. If what I suspect it should do, only PPC and ARM64 need the barrier. The

[PATCH 4/5] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h

2016-08-31 Thread Manfred Spraul
barrier: (everything initialized to 0) CPU1: a=1; spin_unlock(); spin_lock(); + smp_mb__after_unlock_lock(); r1=d; CPU2: d=1; smp_mb(); r2=a; Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would be possible. Signed-off-by: Manfred Spraul <manf...@colorfullife.com&

[PATCH 2/5] spinlock: Document memory barrier rules for spin_lock and spin_unlock().

2016-08-31 Thread Manfred Spraul
? - spin_unlock_wait() is an ACQUIRE. - No memory ordering is enforced by spin_is_locked(). The patch adds this into Documentation/locking/spinlock.txt. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- Documentation/locking/spinlocks.txt | 9 + 1 file changed, 9 insertions(+) diff

[PATCH 0/5 V5] Clarify/standardize memory barriers for lock/unlock

2016-08-31 Thread Manfred Spraul
Hi, V5: Major restructuring based on input from Peter and Davidlohr. As discussed before: If a high-scalability locking scheme is built with multiple spinlocks, then often additional memory barriers are required. The documentation was not as clear as possible, and memory barriers were missing /

[PATCH 5/5] net/netfilter/nf_conntrack_core: update memory barriers.

2016-08-31 Thread Manfred Spraul
k) instead of spin_unlock_wait(_lock) and loop backward. - use smp_store_mb() instead of a raw smp_mb() Signed-off-by: Manfred Spraul <manf...@colorfullife.com> Cc: Pablo Neira Ayuso <pa...@netfilter.org> Cc: netfilter-de...@vger.kernel.org --- Question: Should I split this patch? First a patch that

[PATCH 1/5] ipc/sem.c: Remove smp_rmb() from complexmode_enter()

2016-08-31 Thread Manfred Spraul
mb() after spin_unlock_wait() can be removed. Not for stable! Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- ipc/sem.c | 8 1 file changed, 8 deletions(-) diff --git a/ipc/sem.c b/ipc/sem.c index 5e318c5..6586e0a 100644 --- a/ipc/sem.c +++ b/ipc/sem.c @@ -290,14 +290,

[PATCH 3/5] spinlock: define spinlock_store_acquire

2016-08-31 Thread Manfred Spraul
nverts ipc/sem.c to the new define. For overriding, the same approach as for smp_mb__before_spin_lock() is used: If smp_mb__after_spin_lock is already defined, then it is not changed. The default is smp_mb(), to ensure that no architecture gets broken. Signed-off-by: Manfred Spraul <manf...@col

Re: [PATCH 2/5] ipc/sem: rework task wakeups

2016-09-13 Thread Manfred Spraul
Hi Davidlohr, On 09/12/2016 01:53 PM, Davidlohr Bueso wrote: Hmean sembench-sem-482 965735.00 ( 0.00%) 1040313.00 ( 7.72%) [...] Signed-off-by: Davidlohr Bueso --- ipc/sem.c | 268 +++--- 1 file changed, 83

Re: [PATCH 3/5] ipc/sem: optimize perform_atomic_semop()

2016-09-12 Thread Manfred Spraul
Hi Davidlohr, On 09/12/2016 01:53 PM, Davidlohr Bueso wrote: This is the main workhorse that deals with semop user calls such that the waitforzero or semval update operations, on the set, can complete on not as the sma currently stands. Currently, the set is iterated twice (setting semval, then

Re: [PATCH 1/5] ipc/sem: do not call wake_sem_queue_do() prematurely

2016-09-12 Thread Manfred Spraul
Hi Davidlohr, On 09/12/2016 01:53 PM, Davidlohr Bueso wrote: ... as this call should obviously be paired with its _prepare() counterpart. At least whenever possible, as there is no harm in calling it bogusly as we do now in a few places. I would define the interface differently: WAKE_Q creates

Re: [lkp] [ipc/sem.c] 0882cba0a0: aim9.shared_memory.ops_per_sec -8.8% regression

2016-10-09 Thread Manfred Spraul
Hi, On 10/09/2016 09:05 AM, kernel test robot wrote: FYI, we noticed a -8.8% regression of aim9.shared_memory.ops_per_sec due to commit: commit 0882cba0a03bca73acd8fab8fb50db04691908e9 ("ipc/sem.c: fix complex_count vs. simple op race")

Re: [PATCH 2/5] ipc/sem: rework task wakeups

2016-09-18 Thread Manfred Spraul
Hi Davidlohr, On 09/12/2016 01:53 PM, Davidlohr Bueso wrote: @@ -1933,22 +1823,32 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops, queue.alter = alter; error = perform_atomic_semop(sma, &queue); - if (error == 0) { - /* If the operation was

Re: [PATCH 5/5] ipc/sem: use proper list api for pending_list wakeups

2016-09-18 Thread Manfred Spraul
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote: ... saves some LoC and looks cleaner than re-implementing the calls. Signed-off-by: Davidlohr Bueso <dbu...@suse.de> Acked-by: Manfred Spraul <manf...@colorfullife.com> -- Manfred

Re: [PATCH 4/5] ipc/msg: Lockless security checks for msgsnd

2016-09-17 Thread Manfred Spraul
Hi Davidlohr, Just as with msgrcv (along with the rest of sysvipc since a few years ago), perform the security checks without holding the ipc object lock. Thinking about it: isn't this wrong? CPU1: * msgrcv() * ipcperms() CPU2: * msgctl(), change permissions ** msgctl() returns, new

Re: [PATCH 3/5] ipc/sem: optimize perform_atomic_semop()

2016-09-18 Thread Manfred Spraul
about the attached dup detection? -- Manfred From 140340a358dbf66b3bc6f848ca9b860e3e957e84 Mon Sep 17 00:00:00 2001 From: Manfred Spraul <manf...@colorfullife.com> Date: Mon, 19 Sep 2016 06:25:20 +0200 Subject: [PATCH] ipc/sem: Update duplicate sop detection The duplicated sop detection can be improved: - use uint64_t

Re: [PATCH 2/5] ipc/sem: rework task wakeups

2016-09-19 Thread Manfred Spraul
sembench-sem-482 965735.00 ( 0.00%) 1040313.00 ( 7.72%) Signed-off-by: Davidlohr Bueso <dbu...@suse.de> Acked-by: Manfred Spraul <manf...@colorfullife.com> -- Manfred

Re: [PATCH 4/5] ipc/msg: Lockless security checks for msgsnd

2016-09-22 Thread Manfred Spraul
On 09/22/2016 12:21 AM, Davidlohr Bueso wrote: On Sun, 18 Sep 2016, Manfred Spraul wrote: Just as with msgrcv (along with the rest of sysvipc since a few years ago), perform the security checks without holding the ipc object lock. Thinking about it: isn't this wrong? CPU1: * msgrcv

Re: [PATCH -next v2 0/5] ipc/sem: semop(2) improvements

2016-09-19 Thread Manfred Spraul
On 09/18/2016 09:11 PM, Davidlohr Bueso wrote: Changes from v1 (https://lkml.org/lkml/2016/9/12/266) - Got rid of the signal_pending check in wakeup fastpath. (patch 2) - Added read/access once to queue.status (we're obviously concerned about lockless access upon unrelated events, even if on

[PATCH 3/4] net/netfilter/nf_conntrack_core: update memory barriers.

2016-08-28 Thread Manfred Spraul
change avoids that nf_conntrack_lock() could loop multiple times. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- net/netfilter/nf_conntrack_core.c | 36 ++-- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/net/net

[PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h

2016-08-28 Thread Manfred Spraul
spin_unlock() + spin_lock() together do not form a full memory barrier: a=1; spin_unlock(); spin_lock(); + smp_mb__after_unlock_lock(); d=1; Without the smp_mb__after_unlock_lock(), other CPUs can observe the write to d without seeing the write to a. Signed-off-by: Manfred Spraul <m
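The flattened preview above shows only one CPU's side of the argument. Written out as a litmus-style sketch (a non-runnable illustration; the `r1`/`r2` observer side follows the v3/V4 postings of the same patch later in this listing, and the two spinlocks may be distinct):

```
/* Litmus-style sketch; everything initialized to 0. */

CPU1:                            CPU2:
  a = 1;                           d = 1;
  spin_unlock();                   smp_mb();
  spin_lock();                     r2 = a;
  smp_mb__after_unlock_lock();
  r1 = d;

/*
 * Without smp_mb__after_unlock_lock(), the outcome
 * r1 == 0 && r2 == 0 is possible: an unlock followed
 * by a lock does not form a full memory barrier.
 * With the barrier, that outcome is forbidden.
 */
```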

[PATCH 0/4] Clarify/standardize memory barriers for lock/unlock

2016-08-28 Thread Manfred Spraul
Hi, as discussed before: If a high-scalability locking scheme is built with multiple spinlocks, then often additional memory barriers are required. The documentation was not as clear as possible, and memory barriers were missing / superfluous in the implementation. Patch 1: Documentation,

[PATCH 4/4] qspinlock for x86: smp_mb__after_spin_lock() is free

2016-08-28 Thread Manfred Spraul
queued_spin_unlock_wait for details. As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used in any hotpaths, the patch does not create that define yet. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- arch/x86/include/asm/qspinlock.h | 11 +++ 1 file chang

[PATCH 1/4] spinlock: Document memory barrier rules

2016-08-28 Thread Manfred Spraul
(), that is part of spin_unlock_wait() - smp_mb__after_spin_lock() instead of a direct smp_mb(). Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- Documentation/locking/spinlocks.txt | 5 + include/linux/spinlock.h| 12 ipc

[PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h

2016-08-28 Thread Manfred Spraul
possible. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- include/asm-generic/barrier.h | 16 kernel/rcu/tree.h | 12 2 files changed, 16 insertions(+), 12 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/b

[PATCH 2/4 v3] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h

2016-08-28 Thread Manfred Spraul
barrier: (everything initialized to 0) CPU1: a=1; spin_unlock(); spin_lock(); + smp_mb__after_unlock_lock(); r1=d; CPU2: d=1; smp_mb(); r2=a; Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would be possible. Signed-off-by: Manfred Spraul <manf...@colorfullife.com>

Re: [PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h

2016-08-28 Thread Manfred Spraul
On 08/28/2016 03:43 PM, Paul E. McKenney wrote: Without the smp_mb__after_unlock_lock(), other CPUs can observe the write to d without seeing the write to a. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> With the upgraded commit log, I am OK with the patch below. Done. H

Re: [PATCH 3.14 17/29] sysv, ipc: fix security-layer leaking

2016-08-29 Thread Manfred Spraul
kmemleak_alloc+0x23/0x40 kmem_cache_alloc_trace+0xe1/0x180 selinux_msg_queue_alloc_security+0x3f/0xd0 security_msg_queue_alloc+0x2e/0x40 newque+0x4e/0x150 ipcget+0x159/0x1b0 SyS_msgget+0x39/0x40 entry_SYSCALL_64_fastpath+0x13/0x8f Manfred Spraul suggested to fix s

[PATCH 3/4 V4] net/netfilter/nf_conntrack_core: update memory barriers.

2016-08-29 Thread Manfred Spraul
change avoids that nf_conntrack_lock() could loop multiple times. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- net/netfilter/nf_conntrack_core.c | 36 ++-- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/net/net

[PATCH 1/4 v4] spinlock: Document memory barrier rules

2016-08-29 Thread Manfred Spraul
s override it with a less expensive barrier if this is sufficient for their hardware/spinlock implementation. For overriding, the same approach as for smp_mb__before_spin_lock() is used: If smp_mb__after_spin_lock is already defined, then it is not changed. Signed-off-by: Manfred Spraul <manf...@col

[PATCH 2/4 V4] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h

2016-08-29 Thread Manfred Spraul
barrier: (everything initialized to 0) CPU1: a=1; spin_unlock(); spin_lock(); + smp_mb__after_unlock_lock(); r1=d; CPU2: d=1; smp_mb(); r2=a; Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would be possible. Signed-off-by: Manfred Spraul <manf...@colorfullife.com>

[PATCH 0/4 V4] Clarify/standardize memory barriers for lock/unlock

2016-08-29 Thread Manfred Spraul
Hi, V4: Docu/comment improvements, remove unnecessary barrier for x86. V3: Bugfix for arm64 V2: Include updated documentation for rcutree patch As discussed before: If a high-scalability locking scheme is built with multiple spinlocks, then often additional memory barriers are required. The

[PATCH 4/4 V4] qspinlock for x86: smp_mb__after_spin_lock() is free

2016-08-29 Thread Manfred Spraul
queued_spin_unlock_wait for details. As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used in any hotpaths, the patch does not create that define yet. Signed-off-by: Manfred Spraul <manf...@colorfullife.com> --- arch/x86/include/asm/qspinlock.h | 11 +++ 1 file chang
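The rationale behind "is free" is that on x86 a spinlock is acquired with a LOCK-prefixed atomic operation, which already acts as a full memory barrier, so no additional instruction is needed. A hedged sketch of what such an override could look like (the actual patch body is truncated above; this is an illustration, not the verbatim change):

```c
/*
 * Sketch only: on x86, the locked atomic that acquires the
 * (queued) spinlock already implies a full memory barrier,
 * so the hook can be defined away to a no-op.
 */
#define smp_mb__after_spin_lock() do { } while (0)
```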

Re: [PATCH 1/4] spinlock: Document memory barrier rules

2016-08-29 Thread Manfred Spraul
Hi Peter, On 08/29/2016 12:48 PM, Peter Zijlstra wrote: On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote: Right now, the spinlock machinery tries to guarantee barriers even for unorthodox locking cases, which ends up as a constant stream of updates as the architectures try
