Commit-ID: 3ccfebedd8cf54e291c809c838d8ad5cc00f5688
Gitweb: https://git.kernel.org/tip/3ccfebedd8cf54e291c809c838d8ad5cc00f5688
Author: Mathieu Desnoyers
AuthorDate: Mon, 29 Jan 2018 15:20:11 -0500
Committer: Ingo Molnar
CommitDate: Mon, 5 Feb 2018 21:34:02 +0100
powerpc, membarrier
Commit-ID: 667ca1ec7c9eb7ac3b80590b6597151b4c2a750b
Gitweb: https://git.kernel.org/tip/667ca1ec7c9eb7ac3b80590b6597151b4c2a750b
Author: Mathieu Desnoyers
AuthorDate: Mon, 29 Jan 2018 15:20:10 -0500
Committer: Ingo Molnar
CommitDate: Mon, 5 Feb 2018 21:33:29 +0100
membarrier/selftest
- On Feb 5, 2018, at 3:22 PM, Ingo Molnar mi...@kernel.org wrote:
> * Mathieu Desnoyers wrote:
>
>>
>> +config ARCH_HAS_MEMBARRIER_HOOKS
>> +bool
>
> Yeah, so I have renamed this to ARCH_HAS_MEMBARRIER_CALLBACKS, and propagated it through
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
> But if you have a git tree already set up, just holler.
I indeed have a git tree set up. The URL is in the pull request I sent you this
morning.
But I favor letting it go through Ingo's scheduler tree and benefit from the
extra bit of automated testing this could add.
Thanks to you both,
Mathieu
This is opt-in per architecture.
The other patches add selftests and documentation.
--------
Mathieu Desnoyers (11):
membarrier: selftest: Test private expedited cmd (v2)
powerpc: membarrier: Skip memory barrier in switch_mm() (v7)
membarri
but we'll probably end up adding and/or
changing tracepoints to help users out there who need tools analyzing this
scheduling data.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Feb 3, 2018, at 4:43 PM, Linus Torvalds torva...@linux-foundation.org
wrote:
> On Sat, Feb 3, 2018 at 9:04 AM, Mathieu Desnoyers
> wrote:
>>
>> The approach proposed here will introduce an expectation that internal
>> function signatures never change in the kernel
really to address this "stable instrumentation" issue, I don't
think hooking on functions helps in any way. I hope we can work on defining
instrumentation interface rules in order to deal with the fundamental problem
of requiring tooling to adapt to kernel changes.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
: sched instrumentation on stable RT kernels
* timer API transition for kernel 4.15
* Fix: Don't nest get online cpus
* Fix: lttng_channel_syscall_mask() bool use in bitfield
* Fix: update kmem instrumentation for kernel 4.15
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
to select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Thomas Gleixner <t...@linutronix.de>
CC: Andy Lutomirski <l...@kernel.org>
CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Test the new MEMBARRIER_CMD_PRIVATE_EXPEDITED and
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED commands.
Add checks expecting specific error values on system calls expected to
fail.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Shuah Khan <shua...@osg.sa
Test the new MEMBARRIER_CMD_GLOBAL_EXPEDITED and
MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED commands.
Adapt to the MEMBARRIER_CMD_SHARED -> MEMBARRIER_CMD_GLOBAL rename.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Shuah Khan <shua...@osg.samsung.com>
GLOBAL", keeping an alias of
MEMBARRIER_CMD_SHARED to MEMBARRIER_CMD_GLOBAL for UAPI header backward
compatibility.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Paul E. McKenney <paul...@linux.vnet.ibm.
ace thread. We currently have an implicit barrier
from atomic_dec_and_test() in mmdrop() that ensures this.
The x86 switch_mm_irqs_off() full barrier is currently provided by many
cpumask update operations as well as write_cr3(). Document that
write_cr3() provides this barrier.
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Andy Lutomirski <l...@kernel.org>
CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.f...@gmail.com>
CC: Andrew Hunt
when returning to user-space,
or implement their architecture-specific sync_core_before_usermode().
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Andy Lutomirski <l...@kernel.org>
CC: P
ent mm into
active_mm) by adding a sync_core() in that specific case.
Use the new sync_core_before_usermode() to guarantee this.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Andy Lutomirski <l...@kernel.org>
CC
Test the new MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE and
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE commands.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Shuah Khan <shua...@osg.samsung.com>
Acked-by: Peter Zijlstra (Intel) <pet...@in
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.f...@gmail.com>
CC: Andrew Hunter <a...@google.com>
CC: Maged Michael
Ensure that a core serializing instruction is issued before returning to
user-mode. x86 implements return to user-space through sysexit, sysret,
and sysretq, which are not core serializing.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Peter Zijlstra (Intel
selftests and documentation.
Thanks,
Mathieu
Mathieu Desnoyers (11):
membarrier: selftest: Test private expedited cmd (v2)
powerpc: membarrier: Skip memory barrier in switch_mm() (v7)
membarrier: Document scheduler barrier requirements (v5)
membarrier: provide GLOBAL_EXPEDITED command (v3)
mem
Signed-off-by: Mathieu Desnoyers
Acked-by: Peter Zijlstra (Intel)
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Alan Stern
CC: Will Deacon
CC: Andy Lutomirski
CC:
- On Jan 29, 2018, at 2:09 PM, Peter Zijlstra pet...@infradead.org wrote:
> On Mon, Jan 29, 2018 at 06:36:05PM +0000, Mathieu Desnoyers wrote:
>> - On Jan 29, 2018, at 1:15 PM, Peter Zijlstra pet...@infradead.org wrote:
>
>> > Aaah, its the case where we do not p
- On Jan 29, 2018, at 1:15 PM, Peter Zijlstra pet...@infradead.org wrote:
> On Mon, Jan 29, 2018 at 07:04:14PM +0100, Peter Zijlstra wrote:
>> On Tue, Jan 23, 2018 at 10:57:30AM -0500, Mathieu Desnoyers wrote:
>> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>&
Hi Ingo, Hi Peter,
Please let me know if you find anything that prevents you from integrating
this patchset into the scheduler tree.
Thanks,
Mathieu
- On Jan 23, 2018, at 10:57 AM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> Hi Ingo, Peter, Thomas,
>
> Here is th
-rcu/
Project website: http://liburcu.org
Git repository: git://git.liburcu.org/urcu.git
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
CC: Peter Zijlstra <pet...@infradead.org>
CC: Paul E. McKenney <paul...@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.f...@gmail.com>
CC: Andrew Hunter <a...@google.com>
CC: Maged Michael <maged.mich...@gmai
the scheduler has updated the curr->mm pointer (before
going back to user-space). They should then select
ARCH_HAS_MEMBARRIER_SYNC_CORE to enable support for that command on
their architecture.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
CC: Peter Zijlstra <pet...@infra
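The opt-in itself is just a Kconfig select; a hypothetical sketch of how an architecture entry would enable it (the symbol name is from the patch, the placement and surrounding lines are illustrative):

```kconfig
# Illustrative only: an architecture opts in by selecting the symbol
# from its top-level Kconfig entry; exact placement varies per arch.
config X86
	def_bool y
	select ARCH_HAS_MEMBARRIER_SYNC_CORE
```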
from
Android developers that the proposed ABI fits their use-case.
Only x86 32/64 and arm 64 implement this command so far. This is
opt-in per architecture.
The other patches add selftests and documentation.
Thanks,
Mathieu
Mathieu Desnoyers (11):
membarrier: selftest: Test private expedi
- On Jan 17, 2018, at 1:13 PM, Andy Lutomirski l...@kernel.org wrote:
> On Wed, Jan 17, 2018 at 10:10 AM, Mathieu Desnoyers
> <mathieu.desnoy...@efficios.com> wrote:
>> - On Jan 17, 2018, at 12:53 PM, Andy Lutomirski l...@kernel.org wrote:
>>
>>> On Wed,
- On Jan 17, 2018, at 12:53 PM, Andy Lutomirski l...@kernel.org wrote:
> On Wed, Jan 17, 2018 at 8:54 AM, Mathieu Desnoyers
> <mathieu.desnoy...@efficios.com> wrote:
>> Ensure that a core serializing instruction is issued before returning to
>> user-mode. x86 impleme