Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Doug Ledford
Cc: Christian Benvenuti
Cc: linux-r...@vger.kernel.org
Signed-off-by: Davidlohr Bueso
---
This is part of the rbtree internal caching series:
https://marc.info/?l=linux-kernel&m=149611025616685
drivers/gpu/drm/amd/amdgpu/amdg
interval_tree.h _is_ the generic flavor.
Signed-off-by: Davidlohr Bueso
---
include/linux/interval_tree_generic.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/interval_tree_generic.h
b/include/linux/interval_tree_generic.h
index f096423c8cbd..1f97ce26
... such that we can avoid the tree walks to get the
node with the smallest key. Semantically the same
as the previously used rb_first(), but O(1).
Signed-off-by: Davidlohr Bueso
---
fs/proc/generic.c | 26 ++
fs/proc/internal.h | 2 +-
fs/proc/proc_net.c | 2 +-
fs
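The leftmost-caching idea above can be sketched outside the kernel. This is a hypothetical, simplified model (a plain unbalanced BST with invented names), not the actual rbtree patch; the kernel version does the same bookkeeping for red-black trees, and erase needs the symmetric update of the cached pointer:

```c
#include <stddef.h>

/* Keep, next to the root, a pointer to the node with the smallest
 * key, updated on insert, so "first" is O(1) instead of a walk down
 * the left spine (which is what rb_first() does). */
struct node {
    long key;
    struct node *left, *right;
};

struct root_cached {
    struct node *root;
    struct node *leftmost;  /* cached smallest-key node */
};

static void insert(struct root_cached *t, struct node *n)
{
    struct node **p = &t->root;
    int is_leftmost = 1;    /* stays true while we only descend left */

    n->left = n->right = NULL;
    while (*p) {
        if (n->key < (*p)->key) {
            p = &(*p)->left;
        } else {
            p = &(*p)->right;
            is_leftmost = 0;
        }
    }
    *p = n;
    if (is_leftmost)
        t->leftmost = n;    /* new smallest key */
    /* An erase path would likewise have to replace t->leftmost when
     * the cached node is removed, as the patched __rb_erase_augmented()
     * does via its extra leftmost argument. */
}

/* O(1) equivalent of walking to the first (smallest) node. */
static struct node *first_cached(struct root_cached *t)
{
    return t->leftmost;
}
```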
... with the generic rbtree flavor instead. No changes
in semantics whatsoever.
Signed-off-by: Davidlohr Bueso
---
kernel/sched/deadline.c | 50 +++--
kernel/sched/sched.h| 6 ++
2 files changed, 21 insertions(+), 35 deletions(-)
diff --git
... such that we can avoid the tree walks to get the
node with the smallest key. Semantically the same
as the previously used rb_first(), but O(1). The
main overhead is the extra footprint for the cached
rb_node pointer, which should not matter for epoll.
Signed-off-by: Davidlohr Bueso
---
fs
... with the generic rbtree flavor instead. No changes
in semantics whatsoever.
Signed-off-by: Davidlohr Bueso
---
include/linux/init_task.h | 5 ++---
include/linux/rtmutex.h | 9 -
include/linux/sched.h | 3 +--
kernel/fork.c | 3
semantics for users that
don't care about the optimization incurs zero overhead.
Signed-off-by: Davidlohr Bueso
---
include/linux/rbtree.h | 11 +++
include/linux/rbtree_augmented.h | 33 ++---
lib/rbtree.c
Ingo, Peter if you have no objections, could you please pick up
patches 3 & 4, which really have nothing to do with this series;
they are independent fixes.
Thanks,
Davidlohr
On Thu, 08 Jun 2017, Peter Zijlstra wrote:
On Mon, May 29, 2017 at 07:09:36PM -0700, Davidlohr Bueso wrote:
static __always_inline struct rb_node *
__rb_erase_augmented(struct rb_node *node, struct rb_root *root,
+bool cached, struct rb_node **leftmost
ping?
On Wed, 24 May 2017, Laurent Dufour wrote:
A new configuration variable is introduced to activate the use of a
range lock instead of a semaphore to protect the per-process memory layout.
This range lock replaces the use of a semaphore for mmap_sem.
Currently only available for X86_64 and PPC64 arc
On Wed, 24 May 2017, Laurent Dufour wrote:
The range locking framework doesn't yet provide a nested locking
operation.
Once the range locking API provides nested operation support,
this patch will have to be reviewed.
Please note that we already have range_write_lock_nest_lock().
Thanks,
Da
Hi Laurent!
On Wed, 24 May 2017, Laurent Dufour wrote:
When mmap_sem is moved to a range lock, some assertions done in
the code will have to be reviewed to work with the range locking as
well.
This patch disables these assertions for the moment, but it has to be
reviewed later once the range l
On Tue, 30 May 2017, Kees Cook wrote:
A new patch has been added at the start of this series to make the default
refcount_t implementation just use an unchecked atomic_t implementation,
since many kernel subsystems want to be able to opt out of the full
validation, which includes a small perf
On Tue, 30 May 2017, kbuild test robot wrote:
Hi Davidlohr,
[auto build test ERROR on next-20170529]
url:
https://github.com/0day-ci/linux/commits/Davidlohr-Bueso/rbtree-Cache-leftmost-node-internally/20170530-101713
config: x86_64-allmodconfig (attached as .config)
compiler: gcc-6
... with the generic rbtree flavor instead. No changes
in semantics whatsoever.
Signed-off-by: Davidlohr Bueso
---
include/linux/init_task.h | 5 ++---
include/linux/rtmutex.h | 9 -
include/linux/sched.h | 3 +--
kernel/fork.c | 3
... with the generic rbtree flavor instead. No changes
in semantics whatsoever.
Signed-off-by: Davidlohr Bueso
---
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 35 +++
kernel/sched/sched.h | 3 +--
3 files changed, 13 insertions(+), 27 deletions(-)
diff
... with the generic rbtree flavor instead. No changes
in semantics whatsoever.
Signed-off-by: Davidlohr Bueso
---
kernel/sched/deadline.c | 50 +++--
kernel/sched/sched.h| 6 ++
2 files changed, 21 insertions(+), 35 deletions(-)
diff --git
sert_after().
Signed-off-by: Davidlohr Bueso
---
drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 8 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 7 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 2 +-
drivers/gpu/drm/drm_mm.c | 10 ++---
drive
semantics for users that
don't care about the optimization incurs zero overhead.
Signed-off-by: Davidlohr Bueso
---
include/linux/rbtree.h | 11 +++
include/linux/rbtree_augmented.h | 33 ++---
lib/rbtree.c
//lkml.org/lkml/2017/5/16/676
Davidlohr Bueso (5):
rbtree: Cache leftmost node internally
sched/fair: Replace cfs_rq->rb_leftmost
locking/rtmutex: Replace top-waiter and pi_waiters leftmost caching
sched/deadline: Replace earliest deadline and runqueue leftmost caching
lib/interval_
As of:
bf3eac84c42 (percpu-rwsem: kill CONFIG_PERCPU_RWSEM)
we unconditionally build pcpu-rwsems. Remove a leftover
in fs/Kconfig for FILE_LOCKING.
Signed-off-by: Davidlohr Bueso
---
fs/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/Kconfig b/fs/Kconfig
index b0e42b6a96b9..7aee6d699fd6
Allows for more flexible debugging.
Signed-off-by: Davidlohr Bueso
---
lib/interval_tree_test.c | 57 +---
1 file changed, 40 insertions(+), 17 deletions(-)
diff --git a/lib/interval_tree_test.c b/lib/interval_tree_test.c
index 245900b98c8e
... such that a user can specify visiting all the nodes
in the tree (intersects with the world). This is a nice
opposite from the very basic default query which is a
single point.
Signed-off-by: Davidlohr Bueso
---
lib/interval_tree_test.c | 15 ++-
1 file changed, 10 insertions
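For reference, the closed-interval overlap test that such an "intersects with the world" query reduces to can be sketched as follows; this is an illustrative helper, not the test module's actual code:

```c
#include <limits.h>

/* Closed intervals [start1, last1] and [start2, last2] overlap iff
 * each one starts no later than the other ends. A query spanning
 * [0, ULONG_MAX] ("the world") therefore overlaps every node, so
 * iterating it visits the entire tree. */
static int ranges_overlap(unsigned long start1, unsigned long last1,
                          unsigned long start2, unsigned long last2)
{
    return start1 <= last2 && start2 <= last1;
}
```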
Add a 'max_endpoint' parameter such that users may easily
limit the size of the intervals that are randomly generated.
Signed-off-by: Davidlohr Bueso
---
lib/interval_tree_test.c | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff -
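The 'max_endpoint' name comes from the patch; the generator below is only a guess at the shape of such a knob (hypothetical helper, assumes max_endpoint < ULONG_MAX), showing what bounding the endpoint buys:

```c
#include <stdlib.h>

/* Every generated closed interval [start, last] satisfies
 * last <= max_endpoint, so randomly generated interval sizes
 * stay bounded by the user-supplied limit. */
static void gen_interval(unsigned long max_endpoint,
                         unsigned long *start, unsigned long *last)
{
    *start = (unsigned long)rand() % (max_endpoint + 1);
    *last = *start + (unsigned long)rand() % (max_endpoint - *start + 1);
}
```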
Hi,
Here are some patches that update the interval_tree_test module, allowing
users to pass finer grained options to run the actual test.
Applies on top of v4.12-rc1.
Thanks!
Davidlohr Bueso (4):
lib/interval_tree_test: allow the module to be compiled-in
lib/interval_tree_test: make test
It is a tristate after all, and also serves well for quick debugging.
Signed-off-by: Davidlohr Bueso
---
lib/Kconfig.debug | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..b29bf26e653c 100644
--- a/lib/Kconfig.debug
On Mon, 15 May 2017, Peter Zijlstra wrote:
On Mon, May 15, 2017 at 02:07:21AM -0700, Davidlohr Bueso wrote:
+ * Fairness and freedom from starvation are guaranteed by the lack of lock
+ * stealing, thus range locks depend directly on interval tree semantics.
+ * This is particularly for
On Mon, 15 May 2017, Peter Zijlstra wrote:
Nearly every range_interval_tree_foreach() usage has a
__range_intersects_intree() in front, suggesting our
range_interval_tree_foreach() is 'broken'.
I suppose the only question is if we should fix
range_interval_tree_foreach() or interval_tree_iter_f
On Thu, 20 Apr 2017, Peter Zijlstra wrote:
For opt spinning we need to specifically know who would be next in
order, again, doesn't matter how many, just who's next.
I've sent a v3 with a more precise description of this, which I hope is
to your satisfaction.
Given a clear tree iteration/orde
Things can explode for locktorture if the user does combinations
of nwriters_stress=0 nreaders_stress=0. Fix this by not assuming
we always want to torture writer threads.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 76 +---
1 file
This replaces the in-house version, which is also derived
from Jan's interval tree implementation.
Cc: oleg.dro...@intel.com
Cc: andreas.dil...@intel.com
Cc: jsimm...@infradead.org
Cc: lustre-de...@lists.lustre.org
Signed-off-by: Davidlohr Bueso
---
drivers/staging/lustre/lustre/llite/Mak
Torture the reader/writer range locks. Each thread will attempt to
lock+unlock a range of up to [0, 4096].
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 221 +--
1 file changed, 172 insertions(+), 49 deletions(-)
diff --git a/kernel
testing the new lock and actually
makes use of it for lustre. It has passed quite a bit of artificial pounding and
I believe/hope it is in shape to consider.
Applies on top of tip v4.12-rc1
[1] https://lkml.org/lkml/2013/1/31/483
Thanks!
Davidlohr Bueso (6):
interval-tree: Build uncon
We should account for nreader threads, not writers in this
callback. Could even trigger a div by 0 if the user explicitly
disables writers.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking
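The corner case described reduces to guarding a division by the thread count; a minimal sketch (hypothetical helper name, not the actual locktorture code):

```c
/* locktorture-style stats divide total operations by the number of
 * torture writer threads; with nwriters_stress=0 that divisor is
 * zero, so the division must be guarded. */
static long ops_per_thread(long total_ops, int nthreads)
{
    return nthreads ? total_ops / nthreads : 0;
}
```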
In preparation for range locking, this patch gets rid of
CONFIG_INTERVAL_TREE option as we will unconditionally
build it.
Signed-off-by: Davidlohr Bueso
---
drivers/gpu/drm/Kconfig | 2 --
drivers/gpu/drm/i915/Kconfig | 1 -
lib/Kconfig | 14 --
lib
conversions,
but not enough to matter in the overall picture.
Signed-off-by: Davidlohr Bueso
Reviewed-by: Jan Kara
---
include/linux/lockdep.h | 33 +++
include/linux/range_lock.h | 181
kernel/locking/Makefile | 2 +-
kernel/locking/ra
thus the whole filesystem.
[1] https://www.spinics.net/lists/linux-ext4/msg56238.html
Fixes: b685d3d65ac (block: treat REQ_FUA and REQ_PREFLUSH as synchronous)
Cc: stable
Cc: Jan Kara
Signed-off-by: Davidlohr Bueso
---
fs/btrfs/disk-io.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -
On Thu, 20 Apr 2017, Peter Zijlstra wrote:
On Thu, Apr 20, 2017 at 10:13:26AM -0700, Davidlohr Bueso wrote:
I have thought of some heuristics for avoiding sleeping under certain
constraints, which could mitigate the spinning step we loose, but I fear it
will never be exactly as fast as rwsems
On Wed, 19 Apr 2017, Peter Zijlstra wrote:
- explain why the loss of lock stealing makes sense. IIRC walken added
that specifically to address mmap_sem performance issues.
That's right, and the same applies to the writer spinning stuff, which
can make a huge difference - more so than plai
On Tue, 18 Apr 2017, Laurent Dufour wrote:
On 06/04/2017 10:46, Davidlohr Bueso wrote:
+__range_read_lock_common(struct range_rwlock_tree *tree,
+struct range_rwlock *lock, long state)
+{
+ struct interval_tree_node *node;
+ unsigned long flags
On Thu, 30 Mar 2017, Michal Hocko wrote:
On Wed 29-03-17 10:45:14, Andi Kleen wrote:
On Wed, Mar 29, 2017 at 10:06:25AM +0200, Michal Hocko wrote:
>
> Do we actually have any users?
Yes this feature is widely used.
Considering that none of SHM_HUGE* has been exported to the userspace
headers
On Thu, 06 Apr 2017, Laurent Dufour wrote:
How is 'seqnum' wrapping handled here?
I'd rather see something like time_before() here, isn't it?
Its a 64bit counter, no overflows.
This replaces the in-house version, which is also derived
from Jan's interval tree implementation.
Cc: oleg.dro...@intel.com
Cc: andreas.dil...@intel.com
Cc: jsimm...@infradead.org
Cc: lustre-de...@lists.lustre.org
Signed-off-by: Davidlohr Bueso
---
drivers/staging/lustre/lustre/llite/Mak
In preparation for range locking, this patch gets rid of
CONFIG_INTERVAL_TREE option as we will unconditionally
build it.
Signed-off-by: Davidlohr Bueso
---
drivers/gpu/drm/Kconfig | 2 --
drivers/gpu/drm/i915/Kconfig | 1 -
lib/Kconfig | 14 --
lib
p of tip v4.11-rc5
[1] https://lkml.org/lkml/2013/1/31/483
Thanks!
Davidlohr Bueso (6):
interval-tree: Build unconditionally
locking: Introduce range reader/writer lock
locking/locktorture: Fix rwsem reader_delay
locking/locktorture: Fix num reader/writer corner cases
locking/lock
Things can explode for locktorture if the user does combinations
of nwriters_stress=0 nreaders_stress=0. Fix this by not assuming
we always want to torture writer threads.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 76 +---
1 file
We should account for nreader threads, not writers in this
callback. Could even trigger a div by 0 if the user explicitly
disables writers.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking
allelism; which is no surprise either. As such, microbenchmarks that merely
pound on a lock will pretty much always suffer upon direct lock conversions,
but not enough to matter in the overall picture.
Signed-off-by: Davidlohr Bueso
Reviewed-by: Jan Kara
---
include/linux/range_rwlock.h | 115
Torture the reader/writer range locks. Each thread will attempt to
lock+unlock a range of up to [0, 4096].
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 221 +--
1 file changed, 172 insertions(+), 49 deletions(-)
diff --git a/kernel
On Mon, 03 Apr 2017, Jan Kara wrote:
Or just a plain sequence counter of the lock operations?
So what I dislike about this is that we'd also have to enlarge
the struct range_rwlock_tree. otoh, I'm hesitant to depend on
the tick rate for lock correctness, so perhaps your suggestion
is best.
Th
On Mon, 03 Apr 2017, Laurent Dufour wrote:
Le Tue, 28 Mar 2017 09:39:18 -0700,
Davidlohr Bueso a écrit :
I'll wait to see if there are any more concerns and send a v2 with
your corrections.
Hi Davidlohr, I think there is a major issue regarding the task
catching a signal in wait_for_
As of:
bf3eac84c42 (percpu-rwsem: kill CONFIG_PERCPU_RWSEM)
we unconditionally build pcpu-rwsems. Remove a leftover
in fs/Kconfig for FILE_LOCKING.
Signed-off-by: Davidlohr Bueso
---
fs/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/Kconfig b/fs/Kconfig
index 83eab52fb3f6..3f0e069ec682
On 2017-03-30 07:56, Laurent Dufour wrote:
On 07/03/2017 06:03, Davidlohr Bueso wrote:
+static inline int wait_for_ranges(struct range_rwlock_tree *tree,
+ struct range_rwlock *lock, long state)
+{
+ int ret = 0;
+
+ while (true
On Wed, 29 Mar 2017, Kirill A. Shutemov wrote:
On Wed, Mar 29, 2017 at 08:31:33AM -0700, Davidlohr Bueso wrote:
On Wed, 29 Mar 2017, Laurent Dufour wrote:
> On 28/03/2017 18:58, Kirill A. Shutemov wrote:
> > On Tue, Mar 28, 2017 at 09:39:18AM -0700, Davidlohr Bueso wrote:
> >
On Wed, 29 Mar 2017, Laurent Dufour wrote:
On 28/03/2017 18:58, Kirill A. Shutemov wrote:
On Tue, Mar 28, 2017 at 09:39:18AM -0700, Davidlohr Bueso wrote:
I'll wait to see if there are any more concerns and send a v2 with your
corrections.
Have you tried drop-in replacement of mma
On Wed, 29 Mar 2017, Peter Zijlstra wrote:
On Mon, Mar 06, 2017 at 09:03:26PM -0800, Davidlohr Bueso wrote:
+static __always_inline int
+__range_read_lock_common(struct range_rwlock_tree *tree,
+struct range_rwlock *lock, long state)
+{
+ struct interval_tree_node
On Wed, 29 Mar 2017, Peter Zijlstra wrote:
On Mon, Mar 06, 2017 at 09:03:26PM -0800, Davidlohr Bueso wrote:
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 88e01e08e279..e4d9eadd2c47 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -154,7 +154,6
On Wed, 29 Mar 2017, Peter Zijlstra wrote:
On Mon, Mar 06, 2017 at 09:03:26PM -0800, Davidlohr Bueso wrote:
+#define RANGE_RWLOCK_INFINITY (~0UL - 1)
+#define DEFINE_RANGE_RWLOCK_INF(name) \
+ struct range_rwlock name = __RANGE_RWLOCK_INITIALIZER(0,
RANGE_RWLOCK_INFINITY
Sorry, forgot to add Anshuman.
On Tue, 28 Mar 2017, Davidlohr Bueso wrote:
Do we have any consensus here? Keeping SHM_HUGE_* is currently
winning 2-1. If there are in fact users out there computing the
value manually, then I am ok with keeping it and properly exporting
it. Michal?
Thanks
Do we have any consensus here? Keeping SHM_HUGE_* is currently
winning 2-1. If there are in fact users out there computing the
value manually, then I am ok with keeping it and properly exporting
it. Michal?
Thanks,
Davidlohr
On Tue, 28 Mar 2017, Laurent Dufour wrote:
+#define __RANGE_RWLOCK_INITIALIZER(start, last) { \
+ .node = { \
+ .start = (start)\
+ ,.last = (last) \
+ }
just rip out the whole thing --
the shmget.2 manpage will need updating, as it should not be
describing kernel internals.
Signed-off-by: Davidlohr Bueso
---
include/linux/shm.h| 13 -
ipc/shm.c | 6 +++---
mm/mmap.c
On Tue, 07 Mar 2017, Oleg Drokin wrote:
On Mar 7, 2017, at 12:03 AM, Davidlohr Bueso wrote:
This replaces the in-house version, which is also derived
from Jan's interval tree implementation.
Cc: oleg.dro...@intel.com
Cc: andreas.dil...@intel.com
Cc: jsimm...@infradead.org
Cc: lust
We should account for nreader threads, not writers in this
callback. Could even trigger a div by 0 if the user explicitly
disables writers.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking
is no surprise either. As such, microbenchmarks that merely
pound on a lock will pretty much always suffer upon direct lock conversions,
but not enough to matter in the overall picture.
Signed-off-by: Davidlohr Bueso
---
drivers/gpu/drm/Kconfig | 2 -
drivers/gpu/drm/i915/Kc
Things can explode for locktorture if the user does combinations
of nwriters_stress=0 nreaders_stress=0. Fix this by not assuming
we always want to torture writer threads.
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 76 +---
1 file
op of v4.11-rc1.
[1] https://lkml.org/lkml/2013/1/31/483
Thanks!
Davidlohr Bueso (5):
locking: Introduce range reader/writer lock
locking/locktorture: Fix rwsem reader_delay
locking/locktorture: Fix num reader/writer corner cases
locking/locktorture: Support range rwlocks
staging/lust
Torture the reader/writer range locks. Each thread will attempt to
lock+unlock a range of up to [0, 4096].
Signed-off-by: Davidlohr Bueso
---
kernel/locking/locktorture.c | 221 +--
1 file changed, 172 insertions(+), 49 deletions(-)
diff --git a/kernel
This replaces the in-house version, which is also derived
from Jan's interval tree implementation.
Cc: oleg.dro...@intel.com
Cc: andreas.dil...@intel.com
Cc: jsimm...@infradead.org
Cc: lustre-de...@lists.lustre.org
Signed-off-by: Davidlohr Bueso
---
XXX: compile tested only. In house
On Wed, 22 Feb 2017, Waiman Long wrote:
Move the rwsem_down_read_failed() function down to below the
optimistic spinning section as it is going to use function in that
section in a later patch.
So the title is a bit ambiguous, and I would argue that this
should be folded into patch 3, and just
On Wed, 22 Feb 2017, Waiman Long wrote:
We can safely check the wait_list to see if waiters are present without
lock when there are spinners to fall back on in case we miss a waiter.
The advantage is that we can save a pair of spin_lock/unlock calls
when the wait_list is empty. This translates t
On Wed, 22 Feb 2017, Waiman Long wrote:
On a 2-socket 36-core 72-thread x86-64 E5-2699 v3 system, a rwsem
microbenchmark was run with 36 locking threads (one/core) doing 100k
reader and writer lock/unlock operations each, the resulting locking
rates (avg of 3 runs) on a 4.10 kernel were 561.4 Mo
On Mon, 20 Feb 2017, Michal Hocko wrote:
I am not sure I understand.
$ git grep SHM_HUGE_ include/uapi/
$
So there doesn't seem to be any user visible constant. The man page
mentiones is but I do not really see how is the userspace supposed to
use it.
Yeah, userspace is not supposed to use it
changed, 15 insertions(+), 13 deletions(-)
The SoB list is a bit weird... otherwise, the conversion
obviously makes sense:
Acked-by: Davidlohr Bueso
On Thu, 09 Feb 2017, Hugh Dickins wrote:
I haven't checked, but are you sure that "populated" does nothing
when the attacher had previously called mlockall(MCL_FUTURE)?
I checked and you are certainly right. Andrew, please do not
consider this patch, it's bogus.
Thanks,
Davidlohr
On Fri, 10 Feb 2017, Michal Hocko wrote:
On Thu 09-02-17 12:53:02, Davidlohr Bueso wrote:
The SHM_HUGE_* stuff was introduced in:
42d7395feb5 (mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB)
It unnecessarily adds another layer, specific to sysv shm, without
anything special about
Hi,
Here are some more updates and fixes I noticed while going through
more sysv shm code after the recent shmat(2) fix.
I know it's a bit late in the game, but please consider for v4.11.
Passes ltp tests.
Thanks!
Davidlohr Bueso (4):
ipc/shm: do not check for MAP_POPULATE
ipc/shm:
inlock for semaphores). Therefore,
extend this to all ipc.
The effect of cacheline alignment on sems can be seen in sembench,
which deals mostly with semtimedop wait/wakes, and is seen to improve raw
throughput (worker loops) between 8 and 12% on a 24-core x86 with
over 4 threads.
Signed-off-by: David
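The cacheline-alignment change described above can be illustrated with a toy struct; the 64-byte line size and the struct name are assumptions for the sketch, not the ipc code:

```c
/* Padding/aligning each object to its own 64-byte cache line avoids
 * false sharing when different CPUs touch neighbouring objects
 * (e.g. per-semaphore state hammered by semtimedop wait/wake paths). */
#define CACHELINE_BYTES 64

struct padded_counter {
    long value;
} __attribute__((aligned(CACHELINE_BYTES)));
```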
k.org
Signed-off-by: Davidlohr Bueso
---
mm/mmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 499b988b1639..40b29aca18c1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1479,7 +1479,7 @@ SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned
long
We do not support prefaulting functionality in sysv shm,
nor MAP_NONBLOCK for that matter. Drop the pointless check
for populate in do_shmat().
Signed-off-by: Davidlohr Bueso
---
ipc/shm.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 06ea9ef7f54a
... cleans up early flag and address minutiae.
Signed-off-by: Davidlohr Bueso
---
ipc/shm.c | 16 +++-
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 6b3769967789..9c960241e214 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1095,11 +1095,11
.
Signed-off-by: Davidlohr Bueso
---
ipc/shm.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 81203e8ba013..7512b4fecff4 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1091,8 +1091,8 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct
... it's already using the generic version anyway, so just
drop the file as do the other archs that do not implement
their own version of the current macro.
Cc: lennox...@gmail.com
Cc: liqin.li...@gmail.com
Signed-off-by: Davidlohr Bueso
---
arch/score/include/asm/Kbuild| 1 +
arch/
Hi Andrew,
This is a resend of straightforward arch patches that I had no
response from the maintainers but are trivial enough that I
hope you can pick them up, like you did the m32r one.
Thanks!
Davidlohr Bueso (4):
alpha: use generic current.h
cris: use generic current.h
parisc: use
Given that the arch does not add its own implementations, simply
use the asm-generic/current.h (generic-y) header instead of
duplicating code.
Cc: star...@axis.com
Cc: linux-cris-ker...@axis.com
Signed-off-by: Davidlohr Bueso
---
arch/cris/include/asm/Kbuild| 1 +
arch/cris/include/asm
Given that the arch does not add its own implementations, simply
use the asm-generic/current.h (generic-y) header instead of
duplicating code.
Cc: j...@parisc-linux.org
Cc: linux-par...@vger.kernel.org
Signed-off-by: Davidlohr Bueso
---
arch/parisc/include/asm/Kbuild| 1 +
arch/parisc
Given that the arch does not add its own implementations, simply
use the asm-generic/current.h (generic-y) header instead of
duplicating code.
Cc: linux-al...@vger.kernel.org
Cc: r...@twiddle.net
Signed-off-by: Davidlohr Bueso
---
arch/alpha/include/asm/Kbuild| 1 +
arch/alpha/include/asm
Commit-ID: 0754445d71c37a7afd4f0790a9be4cf53c1b8cc4
Gitweb: http://git.kernel.org/tip/0754445d71c37a7afd4f0790a9be4cf53c1b8cc4
Author: Davidlohr Bueso
AuthorDate: Sun, 29 Jan 2017 07:15:31 -0800
Committer: Ingo Molnar
CommitDate: Wed, 1 Feb 2017 10:02:18 +0100
sched/wake_q: Clarify
Commit-ID: 7e1f9467d1e48c64c27d6a32de1bfd1b9cdb1002
Gitweb: http://git.kernel.org/tip/7e1f9467d1e48c64c27d6a32de1bfd1b9cdb1002
Author: Davidlohr Bueso
AuthorDate: Sun, 29 Jan 2017 07:42:12 -0800
Committer: Ingo Molnar
CommitDate: Wed, 1 Feb 2017 09:17:51 +0100
sched/wait, rcuwait: Fix
Forgot to update the comment after renaming the call.
Signed-off-by: Davidlohr Bueso
---
include/linux/rcuwait.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 0e93d56c7ab2..a4ede51b3e7c 100644
--- a/include/linux
As of bcc9a76d5ac (locking/rwsem: Reinit wake_q after use), the
comment regarding the list reinitialization no longer applies,
update it with the new wake_q_init() helper.
Signed-off-by: Davidlohr Bueso
---
include/linux/sched.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
On Thu, 22 Dec 2016, Bueso wrote:
+ WARN_ON(current->exit_state); \
While not related to this patch, but per 3245d6acab9 (exit: fix race
between wait_consider_task() and wait_task_zombie()), should we not
*_ONCE() all things ->exit_state? I'm not really
Commit-ID: 52b94129f274937e4c25dd17b76697664a3c43c9
Gitweb: http://git.kernel.org/tip/52b94129f274937e4c25dd17b76697664a3c43c9
Author: Davidlohr Bueso
AuthorDate: Wed, 11 Jan 2017 07:22:26 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:35 +0100
locking/percpu-rwsem
Commit-ID: 642fa448ae6b3a4e5e8737054a094173405b7643
Gitweb: http://git.kernel.org/tip/642fa448ae6b3a4e5e8737054a094173405b7643
Author: Davidlohr Bueso
AuthorDate: Tue, 3 Jan 2017 13:43:14 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:16 +0100
sched/core: Remove
Commit-ID: 8f95c90ceb541a38ac16fec48c05142ef1450c25
Gitweb: http://git.kernel.org/tip/8f95c90ceb541a38ac16fec48c05142ef1450c25
Author: Davidlohr Bueso
AuthorDate: Wed, 11 Jan 2017 07:22:25 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:33 +0100
sched/wait, RCU
Commit-ID: 5376f2e722026e91cb46384bda8d8b3e9f88217c
Gitweb: http://git.kernel.org/tip/5376f2e722026e91cb46384bda8d8b3e9f88217c
Author: Davidlohr Bueso
AuthorDate: Tue, 3 Jan 2017 13:43:12 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:13 +0100
drivers/tty: Compute
Commit-ID: d269a8b8c57523a2e328c1ff44fe791e13df3d37
Gitweb: http://git.kernel.org/tip/d269a8b8c57523a2e328c1ff44fe791e13df3d37
Author: Davidlohr Bueso
AuthorDate: Tue, 3 Jan 2017 13:43:13 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:14 +0100
kernel/locking: Compute
Commit-ID: 0039962a1473f07fd5c8355bd8264be1eb87eb3e
Gitweb: http://git.kernel.org/tip/0039962a1473f07fd5c8355bd8264be1eb87eb3e
Author: Davidlohr Bueso
AuthorDate: Tue, 3 Jan 2017 13:43:11 -0800
Committer: Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:14:11 +0100
kernel/exit: Compute
a writer
can wait for its turn to take the lock. As such, we can avoid the
queue handling and locking overhead.
Reviewed-by: Oleg Nesterov
Signed-off-by: Davidlohr Bueso
---
include/linux/percpu-rwsem.h | 8
kernel/locking/percpu-rwsem.c | 7 +++
2 files changed, 7 insertions