Fortunately Jason was able to reduce some of the overhead we
had introduced in the original rwsem optimistic spinning -
and it is now the same size as mutexes. Update the documentation
accordingly.
Acked-by: Jason Low jason.l...@hp.com
Signed-off-by: Davidlohr Bueso davidl...@hp.com
... as we clearly inline mcs_spin_lock() now.
Acked-by: Jason Low jason.l...@hp.com
Signed-off-by: Davidlohr Bueso davidl...@hp.com
---
kernel/locking/mcs_spinlock.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index 23e89c5
flexible. However, by adding a new variable to the mix, we can
end up wasting space with the unused field, ie: CONFIG_SMP &&
(!CONFIG_MUTEX_SPIN_ON_OWNER && !CONFIG_DEBUG_MUTEX).
Acked-by: Jason Low jason.l...@hp.com
Signed-off-by: Davidlohr Bueso davidl...@hp.com
---
include/linux/mutex.h | 2 +-
kernel
the slowpath with the lock's counter
indicating it is unlocked. -- as returned by the asm fastpath call or by
explicitly setting it. While doing so, at least in theory, we can optimize
and allow faster lock stealing.
Signed-off-by: Davidlohr Bueso davidl...@hp.com
---
Changes from v1:
- Moved
On Wed, 2014-07-30 at 17:11 -0400, Johannes Weiner wrote:
Maintainers often repeat the same feedback on poorly written
changelogs - describe the problem, justify your changes, quantify
optimizations, describe user-visible changes - but our documentation
on writing changelogs doesn't include
On Thu, 2014-07-31 at 12:42 +0200, Peter Zijlstra wrote:
On Tue, Jul 29, 2014 at 02:39:40AM -0400, Rik van Riel wrote:
On Tue, 29 Jul 2014 13:24:05 +0800
Aaron Lu aaron...@intel.com wrote:
FYI, we noticed the below changes on
On Thu, 2014-07-31 at 17:30 -0700, Eric W. Biederman wrote:
There is a small chance changing /proc/net and /proc/mounts will cause
userspace regressions (although nothing has shown up in my testing); if
that happens we can just revert the change that moves them from
/proc/self/... to
On Fri, 2014-08-01 at 10:03 +0800, Aaron Lu wrote:
On Thu, Jul 31, 2014 at 12:42:41PM +0200, Peter Zijlstra wrote:
On Tue, Jul 29, 2014 at 02:39:40AM -0400, Rik van Riel wrote:
On Tue, 29 Jul 2014 13:24:05 +0800
Aaron Lu aaron...@intel.com wrote:
FYI, we noticed the below changes
On Thu, 2014-07-31 at 18:16 +0200, Jirka Hladky wrote:
Peter, I'm seeing regressions for
SINGLE SPECjbb instance for number of warehouses being the same as total
number of cores in the box.
Example: 4 NUMA node box, each CPU has 6 cores = biggest regression is
for 24 warehouses.
By
On Fri, 2014-08-01 at 13:46 -0700, Davidlohr Bueso wrote:
So both these are pretty similar, however, when reverting, on avg we
increase the amount of bops a mere ~4%:
tip/master + reverted:
Just to be clear, this is reverting a43455a1d57.
--
To unsubscribe from this list: send the line
On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
The rwsem_can_spin_on_owner() function currently allows optimistic
spinning only if the owner field is defined and is running. That is
too conservative as it will cause some tasks to miss the opportunity
of doing spinning in case the owner
On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
This patch set improves upon the rwsem optimistic spinning patch set
from Davidlohr to enable better performing rwsem and more aggressive
use of optimistic spinning.
By using a microbenchmark running 1 million lock-unlock operations per
of ELF binaries.
Signed-off-by: Davidlohr Bueso davidl...@hp.com
Cc: Andrew Morton a...@linux-foundation.org
Cc: Martin Schwidefsky schwidef...@de.ibm.com
Cc: Heiko Carstens heiko.carst...@de.ibm.com
Cc: James E.J. Bottomley j...@parisc-linux.org
Cc: Helge Deller del...@gmx.de
Cc: Benjamin
On Wed, 2014-08-13 at 00:52 +0300, Kirill A. Shutemov wrote:
On Tue, Aug 12, 2014 at 10:45:23AM -0700, Davidlohr Bueso wrote:
The most common way of iterating through the list of vmas is via:
for (vma = mm->mmap; vma; vma = vma->vm_next)
This patch replaces this logic with a new
On Wed, 2014-08-13 at 00:52 +0300, Kirill A. Shutemov wrote:
On Tue, Aug 12, 2014 at 10:45:23AM -0700, Davidlohr Bueso wrote:
The most common way of iterating through the list of vmas is via:
for (vma = mm->mmap; vma; vma = vma->vm_next)
This patch replaces this logic with a new
On Tue, 2014-08-12 at 21:43 +0200, Manfred Spraul wrote:
sem_lock right now contains an smp_mb().
I think smp_rmb() would be sufficient - and performance of semop() with rmb()
is up to 10% faster. It would be a pairing of rmb() with spin_unlock().
The race we must protect against is:
this path is rarely called, the cost is really
never noticed.
Signed-off-by: Davidlohr Bueso davidl...@hp.com
---
Original thread: https://lkml.org/lkml/2014/8/8/37
kernel/locking/mutex.c | 43 +++
1 file changed, 43 insertions(+)
diff --git a/kernel/locking
On Thu, 2014-08-14 at 13:17 -0400, Waiman Long wrote:
On 08/14/2014 01:57 AM, Davidlohr Bueso wrote:
The mutex lock-stealing functionality allows another task to
skip its turn in the wait-queue and atomically acquire the lock.
This is fine and a nice optimization, however, when releasing
On Wed, 2014-07-23 at 11:39 -0400, Nick Krause wrote:
I guess this is another bad patch :(.
This is an example of you wasting people's time with thoughtless patches.
Please stop. You've been asked a million times.
On Wed, 2014-07-23 at 12:25 -0400, Milosz Tanski wrote:
I'm using futexes to control scheduling for a userspace application with
multiple queues.
There's a global work queue and a specific per-thread queue. And I would like
to have a
choice between waking up any thread or a specific
On Fri, 2014-08-01 at 16:12 +0800, Wanpeng Li wrote:
External interrupt will cause L1 vmexit w/ reason external interrupt when L2
is
running. Then L1 will pick up the interrupt through vmcs12 if L1 set the ack
interrupt bit. Commit 77b0f5d (KVM: nVMX: Ack and write vector info to
On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
Even though only the writers can perform optimistic spinning, there
is still a chance that readers may take the lock before a spinning
writer can get it. In that case, the owner field will be NULL and the
spinning writer can spin
On Mon, 2014-08-04 at 21:54 -0700, Davidlohr Bueso wrote:
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
+/*
+ * The owner field is set to RWSEM_READ_OWNED if the last owner(s) are
+ * readers. It is not reset until a writer takes over and sets it to its
+ * task structure pointer or NULL when
On Mon, 2014-08-04 at 22:30 -0700, Davidlohr Bueso wrote:
On Mon, 2014-08-04 at 21:54 -0700, Davidlohr Bueso wrote:
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
+/*
+ * The owner field is set to RWSEM_READ_OWNED if the last owner(s) are
+ * readers. It is not reset until a writer takes over
/locking/mutex.o] Error 1
http://kisskb.ellerman.id.au/kisskb/buildresult/11616307/
Ah, indeed. Thanks for the report, afaict this was the only missing
arch.
8<---
From: Davidlohr Bueso davidl...@hp.com
Subject: [PATCH] frv: Define
On Wed, 2014-08-06 at 17:25 -0400, Andev wrote:
On Wed, Aug 6, 2014 at 4:54 PM, Kamal Mostafa ka...@canonical.com wrote:
This is a note to let you know that I have just added a patch titled
locking/mutex: Disable optimistic spinning on some architectures
to the linux-3.13.y-queue
Hi Geert,
On Mon, 2014-04-21 at 09:52 +0200, Geert Uytterhoeven wrote:
Hi David,
On Mon, Apr 21, 2014 at 12:28 AM, Davidlohr Bueso davidl...@hp.com wrote:
On Sun, 2014-04-20 at 10:04 +0200, Geert Uytterhoeven wrote:
On Sun, Apr 20, 2014 at 4:26 AM, Davidlohr Bueso davidl...@hp.com wrote
On Thu, 2014-08-07 at 18:26 -0400, Waiman Long wrote:
v1-v2:
- Remove patch 1 which changes preempt_enable() to
preempt_enable_no_resched().
- Remove the RWSEM_READ_OWNED macro and assume readers own the lock
when owner is NULL.
- Reduce the spin threshold to 64.
So I still don't
On Thu, 2014-08-07 at 18:26 -0400, Waiman Long wrote:
On a highly contended rwsem, spinlock contention due to the slow
rwsem_wake() call can be a significant portion of the total CPU cycles
used. With writer lock stealing and writer optimistic spinning, there
is also a pretty good chance that
On Tue, 2014-08-05 at 10:42 -0700, Davidlohr Bueso wrote:
On Tue, 2014-08-05 at 15:04 +0200, Geert Uytterhoeven wrote:
It looks like you forgot to update frv? It's been failing on -next for a
few days:
Anyway, developers can be alerted sooner about this (ie: while it's still
in the -next phase
On Thu, 2014-08-07 at 17:45 -0700, Davidlohr Bueso wrote:
On Thu, 2014-08-07 at 18:26 -0400, Waiman Long wrote:
On a highly contended rwsem, spinlock contention due to the slow
rwsem_wake() call can be a significant portion of the total CPU cycles
used. With writer lock stealing and writer
On Fri, 2014-08-08 at 09:52 +0300, Boaz Harrosh wrote:
On Thu, Aug 7, 2014 at 9:20 PM, One Thousand Gnomes
gno...@lxorguk.ukuu.org.uk wrote:
On Thu, 07 Aug 2014 17:03:08 +0300
Boaz Harrosh b...@plexistor.com wrote:
From: Boaz Harrosh b...@plexistor.com
Some programs like fdisk,
On Fri, 2014-08-08 at 14:30 -0400, Waiman Long wrote:
I have 2 issues about this. First of all, the timing window between the
atomic_set() and mutex_has_owner() check is really small, I doubt it
will be that effective.
That is true, which is why I didn't bother showing any performance data
in
On Fri, 2014-08-08 at 12:50 -0700, Jason Low wrote:
__visible __used noinline
@@ -730,6 +744,23 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
if (__mutex_slowpath_needs_to_unlock())
atomic_set(lock->count, 1);
+/*
+ * Skipping the
On Sun, 2014-08-10 at 17:41 -0400, Waiman Long wrote:
On 08/08/2014 03:03 PM, Davidlohr Bueso wrote:
On Fri, 2014-08-08 at 14:30 -0400, Waiman Long wrote:
I have 2 issues about this. First of all, the timing window between the
atomic_set() and mutex_has_owner() check is really small, I doubt
On Tue, 2014-07-29 at 21:55 +, Steven Stewart-Gallus wrote:
Hello,
I'm trying to debug a hangup where my program loops with FUTEX_WAIT (actually
FUTEX_WAIT_PRIVATE but same thing) endlessly erring out with EAGAIN. I would
like to know if anyone on the mailing list knows when FUTEX_WAIT
3a6bfbc9 (arch,locking: Ciao arch_mutex_cpu_relax()) broke building the frv
arch. Fixes errors such as:
kernel/locking/mcs_spinlock.h:87:2: error: implicit declaration of function
'cpu_relax_lowlatency'
Signed-off-by: Davidlohr Bueso davidl...@hp.com
---
Linus, as discussed, here's the resend
From: Davidlohr Bueso d...@stgolabs.net
Update our documentation as of fix 76835b0ebf8 (futex: Ensure
get_futex_key_refs() always implies a barrier). Explicitly
state that we don't do key referencing for private futexes.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/futex.c | 14
benchmarks, memcached and iozone with the -B option for mmap'ing.
*Untested* paths are nommu, memory-failure, uprobes and xip.
Applies on top of Linus' latest (3.18-rc1+c3351dfabf5c).
Thanks!
Davidlohr Bueso (10):
mm,fs: introduce helpers around the i_mmap_mutex
mm: use new helper functions around
Similarly to the anon memory counterpart, we can share
the mapping's lock ownership as the interval tree is
not modified when doing the walk, only the file
page.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Rik van Riel r...@redhat.com
---
include/linux/fs.h | 10 ++
mm
Convert all open coded mutex_lock/unlock calls to the
i_mmap_[lock/unlock]_write() helpers.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Rik van Riel r...@redhat.com
---
fs/hugetlbfs/inode.c| 4 ++--
kernel/events/uprobes.c | 4 ++--
kernel/fork.c | 4 ++--
mm
.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
mm/filemap_xip.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
index bad746b..0d105ae 100644
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -155,22 +155,14
.
This conversion is straightforward. For now, all users take
the write lock.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
---
fs/hugetlbfs/inode.c | 10 +-
fs/inode.c | 2 +-
include/linux/fs.h | 7
As per the comment in move_ptes(), we only require taking the
anon vma and i_mmap locks to ensure that rmap will always observe
either the old or new ptes, in the case of need_rmap_lock=true.
No modifications to the tree itself, thus share the i_mmap_rwsem.
Signed-off-by: Davidlohr Bueso dbu
Shrinking/truncate logic can call nommu_shrink_inode_mappings()
to verify that any shared mappings of the inode in question aren't
broken (dead zone). afaict the only user being ramfs to handle
the size change attribute.
Pretty much a no-brainer to share the lock.
Signed-off-by: Davidlohr Bueso
No brainer conversion: collect_procs_file() only schedules
a process for later kill, share the lock, similarly to
the anon vma variant.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
mm/memory-failure.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory
the mapping
data.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/events/uprobes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 045b649..7a9e620 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
Various parts of the kernel acquire and release this mutex,
so add i_mmap_lock_write() and i_mmap_unlock_write() helper
functions that will encapsulate this logic. The next patch
will make use of these.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
From: Davidlohr Bueso d...@stgolabs.net
The i_mmap_rwsem protects shared pages against races
when doing the sharing and unsharing, ultimately
calling huge_pmd_share/unshare() for PMD pages --
it also needs it to avoid races when populating the pud
for pmd allocation when looking for a shareable
On Mon, 2014-10-27 at 10:18 +0800, kernel test robot wrote:
FYI, we noticed the below changes on
commit 76835b0ebf8a7fe85beb03c75121419a7dec52f0 (futex: Ensure
get_futex_key_refs() always implies a barrier)
fwiw I was also able to reproduce similar results, with the hashing
costing
ping?
On Tue, 2014-10-14 at 00:27 -0700, Davidlohr Bueso wrote:
Hello,
I'm getting massive amounts of cpu soft lockups in Linus's tree for
today. This occurs almost immediately and is very reproducible in aim7
disk workloads using btrfs:
kernel:[ 559.800017] NMI watchdog: BUG: soft
On Fri, 2014-10-17 at 15:33 -0400, Josef Bacik wrote:
On 10/14/2014 03:27 AM, Davidlohr Bueso wrote:
Hello,
I'm getting massive amounts of cpu soft lockups in Linus's tree for
today. This occurs almost immediately and is very reproducible in aim7
disk workloads using btrfs:
I'm
if there's nothing to
wake up)
Cc: sta...@vger.kernel.org
Cc: Davidlohr Bueso davidl...@hp.com
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Darren Hart dvh...@linux.intel.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Peter Zijlstra pet...@infradead.org
Cc: Ingo Molnar mi...@kernel.org
On Sat, 2014-10-18 at 00:33 -0700, Davidlohr Bueso wrote:
On Fri, 2014-10-17 at 17:38 +0100, Catalin Marinas wrote:
Commit b0c29f79ecea (futexes: Avoid taking the hb->lock if there's
nothing to wake up) changes the futex code to avoid taking a lock when
there are no waiters. This code has
On Sat, 2014-10-18 at 14:32 -0500, Darren Hart wrote:
Which is now incomplete, lacking the explicit smp_mb() added by this
patch. Perhaps the MB implementation of get_futex_key_refs() need not be
explicitly enumerated here?
Agreed, how about this:
diff --git a/kernel/futex.c b/kernel/futex.c
On Sat, 2014-10-18 at 13:50 -0700, Linus Torvalds wrote:
On Sat, Oct 18, 2014 at 12:58 PM, Davidlohr Bueso d...@stgolabs.net wrote:
And [get/put]_futex_keys() shouldn't even be called for private futexes.
The following patch had some very minor testing on a 60 core box last
night
On Mon, 2014-10-20 at 23:56 +0200, Peter Zijlstra wrote:
Hi,
I figured I'd give my 2010 speculative fault series another spin:
https://lkml.org/lkml/2010/1/4/257
Since then I think many of the outstanding issues have changed sufficiently to
warrant another go. In particular Al Viro's
...@redhat.com
Signed-off-by: Thomas Gleixner t...@linutronix.de
Reviewed-by: Davidlohr Bueso d...@stgolabs.net
Just like Documentation/RCU/torture.txt, begin a document for the
locktorture module. This module is still pretty green, so I have
just added some specific sections to the doc (general desc, params,
usage, etc.). Further development should update the file.
Signed-off-by: Davidlohr Bueso dbu
and modprobing,
for instance in module_torture_begin().
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
include/linux/torture.h | 3 ++-
kernel/locking/locktorture.c | 3 ++-
kernel/rcu/rcutorture.c | 3 ++-
kernel/torture.c | 16 +---
4 files changed, 19 insertions
The statistics structure can serve well for both reader and writer
locks, thus simply rename some fields that mention 'write' and leave
the declaration of lwsa.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/locktorture.c | 32
1 file changed
... to just 'torture_runnable'. It follows other variable naming
and is shorter.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/locktorture.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
blocking locks; if run long enough, it can have
the same torturous effect. Furthermore it is more representative of
mutex hold times and can stress better things like thrashing.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
Documentation/locking/locktorture.txt | 2 ++
kernel/locking
the right place for such info.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/locktorture.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 414ba45..a6049fa 100644
--- a/kernel/locking
no particular order, please consider for v3.18.
Davidlohr Bueso (9):
locktorture: Rename locktorture_runnable parameter
locktorture: Add documentation
locktorture: Support mutexes
locktorture: Teach about lock debugging
locktorture: Make statistics generic
torture: Address race in module
We can easily do so with our new reader lock support. Just an arbitrary
design default: readers have higher (5x) critical region latencies than
writers: 50 ms and 10 ms, respectively.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
Documentation/locking/locktorture.txt | 2 ++
kernel/locking
will be the same as the number of writer threads.
Writer threads are interleaved with readers. Documentation is updated
accordingly.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
Documentation/locking/locktorture.txt | 16 +++-
kernel/locking/locktorture.c | 176 ++
2
The number of global variables is getting pretty ugly. Group variables
related to the execution (ie: not parameters) in a new context structure.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/locktorture.c | 161 ++-
1 file changed, 82
Cc'ing Randy.
On Thu, 2014-09-11 at 20:40 -0700, Davidlohr Bueso wrote:
Just like Documentation/RCU/torture.txt, begin a document for the
locktorture module. This module is still pretty green, so I have
just added some specific sections to the doc (general desc, params,
usage, etc.). Further
... when returning from a successful lock acquisition. The horror!
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/rwsem-spinlock.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index
rw-semaphore is the only type of lock doing this ugliness of
exporting at the end of the file.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
---
kernel/locking/rwsem-xadd.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking
On Fri, 2014-09-12 at 09:37 +0200, Peter Zijlstra wrote:
On Thu, Sep 11, 2014 at 09:41:30PM -0700, Davidlohr Bueso wrote:
We can easily do so with our new reader lock support. Just an arbitrary
design default: readers have higher (5x) critical region latencies than
writers: 50 ms and 10 ms
On Fri, 2014-09-12 at 09:06 -0700, Paul E. McKenney wrote:
On Thu, Sep 11, 2014 at 09:40:41PM -0700, Davidlohr Bueso wrote:
In addition, introduce a new nreaders_stress module parameter. The
default number of readers will be the same as the number of writer threads.
Writer threads are interleaved
On Fri, 2014-09-12 at 11:04 -0700, Paul E. McKenney wrote:
On Thu, Sep 11, 2014 at 08:40:21PM -0700, Davidlohr Bueso wrote:
When performing module cleanups by calling torture_cleanup() the
'torture_type' string is nullified. However, callers are not necessarily
done, and might still need
On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
+static void torture_mutex_delay(struct torture_random_state *trsp)
+{
+ const unsigned long longdelay_ms = 100;
+
+ /* We want a long delay occasionally
On Fri, 2014-09-12 at 12:12 -0700, Paul E. McKenney wrote:
On Fri, Sep 12, 2014 at 11:56:31AM -0700, Davidlohr Bueso wrote:
On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
+static void torture_mutex_delay(struct
On Wed, 2014-10-01 at 07:28 +0200, Ingo Molnar wrote:
If you compare an strace of AIM7 steady state and 'perf bench
lock' steady state, is it comparable, i.e. do the syscalls and
other behavioral patterns match up?
With more than 1000 users I'm seeing:
- 33.74% locking-creat
On Wed, 2014-10-01 at 14:12 -0300, Arnaldo Carvalho de Melo wrote:
Em Wed, Oct 01, 2014 at 07:28:32AM +0200, Ingo Molnar escreveu:
If you compare an strace of AIM7 steady state and 'perf bench
lock' steady state, is it comparable, i.e. do the syscalls and
Isn't 'lock' too generic? Isn't
On Fri, 2014-10-03 at 09:36 -0600, Shuah Khan wrote:
msgque.key = ftok(argv[0], 822155650);
if (msgque.key == -1) {
- printf("Can't make key\n");
- return -errno;
+ printf("Can't make key: %d\n", -errno);
So printing a numeric value is quite
On Fri, 2014-10-03 at 13:42 -0600, Shuah Khan wrote:
On 10/03/2014 11:39 AM, Davidlohr Bueso wrote:
On Fri, 2014-10-03 at 09:36 -0600, Shuah Khan wrote:
msgque.key = ftok(argv[0], 822155650);
if (msgque.key == -1) {
- printf("Can't make key\n");
- return -errno
On Tue, 2014-09-30 at 12:12 +0200, Michael Kerrisk (man-pages) wrote:
Hi Doug,
On Mon, Sep 29, 2014 at 7:28 PM, Doug Ledford dledf...@redhat.com wrote:
On Mon, 2014-09-29 at 11:10 +0200, Michael Kerrisk (man-pages) wrote:
Hello Doug, David,
I think you two were the last ones to make
On Tue, 2014-09-30 at 10:30 -0700, Davidlohr Bueso wrote:
Agreed. And this needs to be changed back -- *although* there have been
0 bug reports afaict. Probably similarly to what we did with the
queues_max issue: stable since v3.5. Doug, any thoughts?
Note that by changing back, I don't mean
On Sat, 2014-10-25 at 01:45 +0300, Kirill A. Shutemov wrote:
On Fri, Oct 24, 2014 at 03:06:13PM -0700, Davidlohr Bueso wrote:
diff --git a/mm/fremap.c b/mm/fremap.c
index 72b8fa3..11ef7ec 100644
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -238,13 +238,13 @@ get_write_lock
Shrinking/truncate logic can call nommu_shrink_inode_mappings()
to verify that any shared mappings of the inode in question aren't
broken (dead zone). afaict the only user being ramfs to handle
the size change attribute.
Pretty much a no-brainer to share the lock.
Signed-off-by: Davidlohr Bueso
the mapping
data.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Srikar Dronamraju sri...@linux.vnet.ibm.com
Cc: Oleg Nesterov o...@redhat.com
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
kernel/events/uprobes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions
As per the comment in move_ptes(), we only require taking the
anon vma and i_mmap locks to ensure that rmap will always observe
either the old or new ptes, in the case of need_rmap_lock=true.
No modifications to the tree itself, thus share the i_mmap_rwsem.
Signed-off-by: Davidlohr Bueso dbu
Various parts of the kernel acquire and release this mutex,
so add i_mmap_lock_write() and i_mmap_unlock_write() helper
functions that will encapsulate this logic. The next patch
will make use of these.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
Acked
No brainer conversion: collect_procs_file() only schedules
a process for later kill, share the lock, similarly to
the anon vma variant.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
mm/memory-failure.c | 4 ++--
1 file changed, 2
the interval tree remains intact.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
fs/hugetlbfs/inode.c | 4 ++--
mm/hugetlb.c | 12 ++--
mm/memory.c | 4 ++--
3 files changed, 10 insertions(+), 10 deletions(-)
diff
.
This conversion is straightforward. For now, all users take
the write lock.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
fs/hugetlbfs/inode.c | 10 +-
fs/inode.c
Convert all open coded mutex_lock/unlock calls to the
i_mmap_[lock/unlock]_write() helpers.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
fs/hugetlbfs/inode.c| 4 ++--
kernel/events
.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Kirill A. Shutemov kirill.shute...@intel.linux.com
---
mm/filemap_xip.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
index bad746b..0d105ae 100644
--- a/mm
Similarly to the anon memory counterpart, we can share
the mapping's lock ownership as the interval tree is
not modified when doing the walk, only the file
page.
Signed-off-by: Davidlohr Bueso dbu...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Kirill A. Shutemov kirill.shute
tests pass, in fact more
tests pass with these changes than with an upstream kernel), ltp, aim7
benchmarks, memcached and iozone with the -B option for mmap'ing.
*Untested* paths are nommu, memory-failure, uprobes and xip.
Applies on top of linux-next (20141030).
Thanks!
Davidlohr Bueso (10):
mm
on task_faults_idx and numa_* was
changed in order to match the new logic.
Signed-off-by: Iulia Manda iulia.mand...@gmail.com
Acked-by: Davidlohr Bueso d...@stgolabs.net
With some suggestions below.
---
include/linux/sched.h | 40 ++
kernel/sched/core.c |3
On Mon, 2014-09-22 at 09:11 -0700, Josh Triplett wrote:
Many embedded systems will not need these syscalls, and omitting them
saves space. Add a new EXPERT config option CONFIG_ADVISE_SYSCALLS
(default y) to support compiling them out.
general question: if a user chooses
Hi Paul,
On Mon, 2014-09-22 at 14:46 -0700, Paul E. McKenney wrote:
4.Torture-test updates. These were posted to LKML at
https://lkml.org/lkml/2014/8/28/546 and at
https://lkml.org/lkml/2014/9/11/1114.
I was planning on sending you another batch of torture patches. Would
you
Hello Michael,
On Sun, 2014-09-07 at 07:00 -0700, Michael Kerrisk (man-pages) wrote:
Gidday,
The Linux man-pages maintainer proudly announces:
man-pages-3.72 - man pages for Linux
Tarball download:
http://www.kernel.org/doc/man-pages/download.html
Git repository:
On Fri, 2014-11-21 at 18:03 -0500, Rik van Riel wrote:
On 11/21/2014 03:42 PM, Andrew Morton wrote:
On Fri, 21 Nov 2014 15:29:27 -0500 Rik van Riel r...@redhat.com
wrote:
On 11/21/2014 03:09 PM, Andrew Morton wrote:
On Fri, 21 Nov 2014 14:52:26 -0500 Rik van Riel
r...@redhat.com
the bug that the customer reported, so I am unlikely
to give much in the way of useful testing results...
Andrew, feel free to give Manfred's patch my
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Davidlohr Bueso d...@stgolabs.net