On Fri, 24 Feb 2023, freak07 wrote:
Here are some measurements from a Pixel 7 Pro that's running a kernel either
with the Per-VMA locks patchset or without.
If there's interest I can provide results of other specific apps as well.
Results are from consecutive cold app launches issued with "am
On Thu, 26 Jan 2023, Suren Baghdasaryan wrote:
To simplify the usage of VM_LOCKED_CLEAR_MASK in vm_flags_clear(),
replace it with VM_LOCKED_MASK bitmask and convert all users.
Might be good to mention explicitly no change in semantics, but
otherwise lgtm
Reviewed-by: Davidlohr Bueso
(), vm_flags_reset(), etc?
This would be more idiomatic and I do think the most-significant-first
naming style is preferable.
I tend to prefer this naming yes, but lgtm regardless.
Reviewed-by: Davidlohr Bueso
On Wed, 11 Jan 2023, Suren Baghdasaryan wrote:
On Wed, Jan 11, 2023 at 8:13 AM Davidlohr Bueso wrote:
On Mon, 09 Jan 2023, Suren Baghdasaryan wrote:
>To keep vma locking correctness when vm_flags are modified, add modifier
>functions to be used whenever flags are updated.
How about
On Mon, 09 Jan 2023, Suren Baghdasaryan wrote:
To keep vma locking correctness when vm_flags are modified, add modifier
functions to be used whenever flags are updated.
How about moving this patch and the ones that follow out of this series,
into a preliminary patchset? It would reduce the
On Mon, 09 Jan 2023, Suren Baghdasaryan wrote:
This configuration variable will be used to build the support for VMA
locking during page fault handling.
This is enabled by default on supported architectures with SMP and MMU
set.
The architecture support is needed since the page fault handler
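For reference, a config option matching this description would look roughly like the following Kconfig fragment (sketched from the text above; the symbol names are how this shipped in mainline, but treat them as illustrative here):

```kconfig
config PER_VMA_LOCK
	def_bool y
	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
	help
	  Allow per-vma locking during page fault handling.
```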
On Thu, 09 Jun 2022, Sebastian Andrzej Siewior wrote:
On 2022-05-30 16:15:10 [-0700], Davidlohr Bueso wrote:
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index d0eab5700dc5..31b1900489e7 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi
Tasklets have long been deprecated as being too heavy on the system
by running in irq context - and this is not a performance critical
path. If a higher priority process wants to run, it must wait for
the tasklet to finish before doing so. Use a workqueue instead and
deal with the async work in task context.
Cc: Michael Cyr
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Davidlohr Bueso
---
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 17 +++--
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.h | 1 -
2 files changed, 7 insertions(+), 11
Tasklets have long been deprecated as being too heavy on the system
by running in irq context - and this is not a performance critical
path. If a higher priority process wants to run, it must wait for
the tasklet to finish before doing so.
Process srps asynchronously in process context in a
Hi,
On Mon, 09 Nov 2020, Michal Simek wrote:
Sysace IP is no longer used on Xilinx PowerPC 405/440 and Microblaze
systems. The driver is not regularly tested and has very likely not been
working for quite a long time, which is why it should be removed.
Is there a reason this patch was never merged? Can the driver
Acked-by: Nicholas Piggin
Signed-off-by: Davidlohr Bueso
---
Changes from v1:
Added small description and labeling smp_cond_load_relaxed requested by Nick.
Added Nick's ack.
arch/powerpc/include/asm/barrier.h | 16
arch/powerpc/include/asm/qspinlock.h | 7 +++
2 files changed, 7
On Tue, 16 Mar 2021, Nicholas Piggin wrote:
One request, could you add a comment in place that references
smp_cond_load_relaxed() so this commit can be found again if
someone looks at it? Something like this
/*
* smp_cond_load_relaxed was found to have performance problems if
* implemented
On Tue, 09 Mar 2021, Michal Suchánek wrote:
On Mon, Mar 08, 2021 at 05:59:50PM -0800, Davidlohr Bueso wrote:
49a7d46a06c3 (powerpc: Implement smp_cond_load_relaxed()) added
busy-waiting pausing with a preferred SMT priority pattern, lowering
the priority (reducing decode cycles) during
( 0.00%)15243.14 * 1.48%*
Hmean 512  14891.27 ( 0.00%) 15162.11 * 1.82%*
Measuring the dbench4 Per-VFS Operation latency, shows some very minor
differences within the noise level, around the 0-1% ranges.
Signed-off-by: Davidlohr Bueso
---
arch/powerpc/include/asm/barrier.h
contention on the main
qdisc lock. So any races against spin_is_locked() for archs that
use LL/SC for spin_lock() will be benign and not break any mutual
exclusion; furthermore, both the seqlock and busylock have the same
scope.
Cc: parri.and...@gmail.com
Cc: pab...@redhat.com
Signed-off-by: Davidlohr
Instead of both queued and simple spinlocks doing it. Move
it into the arch's spinlock.h.
Signed-off-by: Davidlohr Bueso
---
arch/powerpc/include/asm/qspinlock.h | 2 --
arch/powerpc/include/asm/simple_spinlock.h | 3 ---
arch/powerpc/include/asm/spinlock.h | 3 +++
3 files changed
win.
Thanks!
Davidlohr Bueso (3):
powerpc/spinlock: Define smp_mb__after_spinlock only once
powerpc/spinlock: Unserialize spin_is_locked
powerpc/qspinlock: Use generic smp_cond_load_relaxed
arch/powerpc/include/asm/barrier.h | 16
arch/powerpc/include/asm/qspinlock.h
On Fri, 20 Mar 2020, Peter Zijlstra wrote:
On Fri, Mar 20, 2020 at 01:55:26AM -0700, Davidlohr Bueso wrote:
- swait_event_interruptible_exclusive(*wq, ((!vcpu->arch.power_off) &&
- (!vcpu->arch.pause)));
+ rcuwait_wa
On Sat, 21 Mar 2020, Thomas Gleixner wrote:
This is the third and hopefully final version of this work. The second one
can be found here:
Would you rather I send in a separate series with the kvm changes, or
should I just send a v2 with the fixes here again?
Thanks,
Davidlohr
of the lockless waiter check from one waitqueue type to
the other.
Signed-off-by: Thomas Gleixner
Cc: Arnd Bergmann
Reviewed-by: Davidlohr Bueso
On Fri, 20 Mar 2020, Sebastian Andrzej Siewior wrote:
I thought that v2 has it fixed with the previous commit (acpi: Remove
header dependency). The kbot just reported that everything is fine.
Let me look...
Nah my bad, that build did not have the full series applied :)
Sorry for the noise.
flavor.
Signed-off-by: Davidlohr Bueso
---
include/linux/swait.h | 23 +--
1 file changed, 5 insertions(+), 18 deletions(-)
diff --git a/include/linux/swait.h b/include/linux/swait.h
index 73e06e9986d4..6e5b5d0e64fd 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
with this change.
Cc: Paolo Bonzini
Signed-off-by: Davidlohr Bueso
---
Only compiled and tested on x86.
arch/powerpc/include/asm/kvm_host.h | 2 +-
arch/powerpc/kvm/book3s_hv.c | 10 --
arch/x86/kvm/lapic.c | 2 +-
include/linux/kvm_host.h | 10
Let the caller know if wake_up_process() was actually called or not;
some users can make use of this information ad hoc. Of course returning
true does not guarantee that wake_up_process() actually woke anything
up.
Signed-off-by: Davidlohr Bueso
---
include/linux/rcuwait.h | 2 +-
kernel/exit.c
The 'trywake' name was renamed to simply 'wake',
update the comment.
Signed-off-by: Davidlohr Bueso
---
kernel/exit.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/exit.c b/kernel/exit.c
index 0b81b26a872a..6cc6cc485d07 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
On Wed, 18 Mar 2020, Thomas Gleixner wrote:
The PS3 one got converted by Peter Zijlstra to rcuwait.
While at it, I think it makes sense to finally convert the kvm vcpu swait
to rcuwait (patch 6/15 starts the necessary api changes). I'm sending
some patches on top of this patchset.
On Wed, 18 Mar 2020, Thomas Gleixner wrote:
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -3,6 +3,7 @@
#define _LINUX_RCUWAIT_H_
#include
+#include
So this is causing build to fail for me:
CC arch/x86/boot/compressed/cmdline.o
On Wed, 18 Mar 2020, Thomas Gleixner wrote:
AFAICT the kthread uses TASK_INTERRUPTIBLE to not increase loadavg, kthreads
cannot receive signals by default and this one doesn't look different. Use
TASK_IDLE instead.
Hmm it seems in general this needs to be done kernel-wide. This kthread abuse
On Wed, 18 Mar 2020, Thomas Gleixner wrote:
+Owner semantics
+===============
+
+Most lock types in the Linux kernel have strict owner semantics, i.e. the
+context (task) which acquires a lock has to release it.
+
+There are two exceptions:
+
+ - semaphores
+ - rwsems
+
+semaphores have no
On Tue, 23 Apr 2019, Bueso wrote:
On Wed, 03 Apr 2019, Daniel Jordan wrote:
On Wed, Apr 03, 2019 at 06:58:45AM +0200, Christophe Leroy wrote:
Le 02/04/2019 à 22:41, Daniel Jordan a écrit :
With locked_vm now an atomic, there is no need to take mmap_sem as
writer. Delete and refactor
On Wed, 03 Apr 2019, Daniel Jordan wrote:
On Wed, Apr 03, 2019 at 06:58:45AM +0200, Christophe Leroy wrote:
Le 02/04/2019 à 22:41, Daniel Jordan a écrit :
> With locked_vm now an atomic, there is no need to take mmap_sem as
> writer. Delete and refactor accordingly.
Could you please detail
On Tue, 23 Apr 2019, Peter Zijlstra wrote:
Also; the initial motivation was prefaulting large VMAs and the
contention on mmap was killing things; but similarly, the contention on
the refcount (I did try that) killed things just the same.
Right, this is just like what can happen with per-vma
On Tue, 02 Apr 2019, Andrew Morton wrote:
Also, we didn't remove any down_write(mmap_sem)s from core code so I'm
thinking that the benefit of removing a few mmap_sem-takings from a few
obscure drivers (sorry ;)) is pretty small.
afaik porting the remaining incorrect users of locked_vm to
On Fri, 22 Mar 2019, Linus Torvalds wrote:
Some of them _might_ be performance-critical. There's the one on
mmap_sem in the fault handling path, for example. And yes, I'd expect
the normal case to very much be "no other readers or writers" for that
one.
Yeah, the mmap_sem case in the fault
On Fri, 08 Feb 2019, Waiman Long wrote:
I am planning to run more performance test and post the data sometimes
next week. Davidlohr is also going to run some of his rwsem performance
test on this patchset.
So I ran this series on a 40-core IB 2 socket with various workloads in
mmtests. Below
On Thu, 07 Feb 2019, Waiman Long wrote:
30 files changed, 1197 insertions(+), 1594 deletions(-)
Performance numbers on numerous workloads, pretty please.
I'll go and throw this at my mmap_sem intensive workloads
I've collected.
Thanks,
Davidlohr
On Thu, 07 Sep 2017, Laurent Dufour wrote:
The commit b5c8f0fd595d ("powerpc/mm: Rework mm_fault_error()") reviewed
the way the error path is managed in __do_page_fault() but it was a bit too
aggressive when handling a case by returning without releasing the mmap_sem.
By the way, replacing
On Fri, 28 Oct 2016, Pan Xinhui wrote:
/*
* If we need to reschedule bail... so we can block.
+* Use vcpu_is_preempted to detech lock holder preemption issue
^^ detect
+ * and break.
Could you
On Wed, 23 Dec 2015, Boqun Feng wrote:
There is one thing we should be aware of, that is the bug:
http://lkml.kernel.org/r/5669d5f2.5050...@caviumnetworks.com
which though has been fixed by:
http://lkml.kernel.org/r/20151217160549.gh6...@twins.programming.kicks-ass.net
Right, and fwiw the
I've left this series testing overnight on a power7 box and so far so good,
nothing has broken.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
-scalar types.
WRITE_ONCE() and READ_ONCE() were introduced in the commits 230fa253df63
(kernel: Provide READ_ONCE and ASSIGN_ONCE) and 43239cbe79fc (kernel:
Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)).
Signed-off-by: Andrey Konovalov andreyk...@google.com
Acked-by: Davidlohr Bueso dbu
counting to make sure that the exe file won't dissappear
underneath us while getting the dcookie.
Cc: Arnd Bergmann a...@arndb.de
Cc: Robert Richter r...@kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: cbe-oss-...@lists.ozlabs.org
Cc: oprofile-l...@lists.sourceforge.net
Signed-off-by: Davidlohr Bueso
On Wed, 2014-08-06 at 17:25 -0400, Andev wrote:
On Wed, Aug 6, 2014 at 4:54 PM, Kamal Mostafa ka...@canonical.com wrote:
This is a note to let you know that I have just added a patch titled
locking/mutex: Disable optimistic spinning on some architectures
to the linux-3.13.y-queue
On Sat, 2014-03-22 at 07:57 +0530, Srikar Dronamraju wrote:
So reverting and applying v3 3/4 and 4/4 patches works for me.
Ok, I verified that the above ends up resulting in the same tree as
the minimal patch I sent out, modulo (a) some comments and (b) an
#ifdef CONFIG_SMP in
On Thu, 2014-03-20 at 15:38 +0530, Srikar Dronamraju wrote:
This problem suggests that we missed a wakeup for a task that was adding
itself to the queue in a wait path. And the only place that can happen
is with the hb spinlock check for any pending waiters. Just in case we
missed some
On Wed, 2014-03-19 at 22:56 -0700, Davidlohr Bueso wrote:
On Thu, 2014-03-20 at 11:03 +0530, Srikar Dronamraju wrote:
Joy,.. let me look at that with ppc in mind.
OK; so while pretty much all the comments from that patch are utter
nonsense (what was I thinking), I cannot actually
On Thu, 2014-03-20 at 09:41 -0700, Linus Torvalds wrote:
On Wed, Mar 19, 2014 at 10:56 PM, Davidlohr Bueso davidl...@hp.com wrote:
This problem suggests that we missed a wakeup for a task that was adding
itself to the queue in a wait path. And the only place that can happen
is with the hb
On Thu, 2014-03-20 at 10:42 -0700, Linus Torvalds wrote:
On Thu, Mar 20, 2014 at 10:18 AM, Davidlohr Bueso davidl...@hp.com wrote:
It strikes me that the spin_is_locked() test has no barriers wrt the
writing of the new futex value on the wake path. And the read barrier
obviously does
On Thu, 2014-03-20 at 11:36 -0700, Linus Torvalds wrote:
On Thu, Mar 20, 2014 at 10:18 AM, Davidlohr Bueso davidl...@hp.com wrote:
Comparing with the patch I sent earlier this morning, looks equivalent,
and fwiw, passes my initial qemu bootup, which is the first way of
detecting anything
On Thu, 2014-03-20 at 12:25 -0700, Linus Torvalds wrote:
On Thu, Mar 20, 2014 at 12:08 PM, Davidlohr Bueso davidl...@hp.com wrote:
Oh, it does. This atomics technique was tested at a customer's site and
ready for upstream.
I'm not worried about the *original* patch. I'm worried about
On Wed, 2014-03-19 at 18:08 +0100, Peter Zijlstra wrote:
On Wed, Mar 19, 2014 at 04:47:05PM +0100, Peter Zijlstra wrote:
I reverted b0c29f79ecea0b6fbcefc999e70f2843ae8306db on top of v3.14-rc6
and confirmed that
reverting the commit solved the problem.
Joy,.. let me look at that
On Thu, 2014-03-20 at 11:03 +0530, Srikar Dronamraju wrote:
Joy,.. let me look at that with ppc in mind.
OK; so while pretty much all the comments from that patch are utter
nonsense (what was I thinking), I cannot actually find a real bug.
But could you try the below which replaces