Commit-ID: 39388e80f9b0c3788bfb6efe3054bdce0c3ead45
Gitweb: https://git.kernel.org/tip/39388e80f9b0c3788bfb6efe3054bdce0c3ead45
Author: Sebastian Andrzej Siewior
AuthorDate: Wed, 3 Apr 2019 18:41:35 +0200
Committer: Borislav Petkov
CommitDate: Wed, 10 Apr 2019 14:46:35 +0200
x86/fpu
Commit-ID: 88f5260a3bf9bfb276b5b4aac2e81587e425a1d7
Gitweb: https://git.kernel.org/tip/88f5260a3bf9bfb276b5b4aac2e81587e425a1d7
Author: Sebastian Andrzej Siewior
AuthorDate: Wed, 3 Apr 2019 18:41:33 +0200
Committer: Borislav Petkov
CommitDate: Tue, 9 Apr 2019 19:28:06 +0200
x86/fpu
Commit-ID: 60e528d6ce3f60a058bbb64f8acb2a07f84b172a
Gitweb: https://git.kernel.org/tip/60e528d6ce3f60a058bbb64f8acb2a07f84b172a
Author: Sebastian Andrzej Siewior
AuthorDate: Wed, 3 Apr 2019 18:41:32 +0200
Committer: Borislav Petkov
CommitDate: Tue, 9 Apr 2019 19:27:46 +0200
x86/fpu
Commit-ID: 6dd677a044e606fd343e31c2108b13d74aec1ca5
Gitweb: https://git.kernel.org/tip/6dd677a044e606fd343e31c2108b13d74aec1ca5
Author: Sebastian Andrzej Siewior
AuthorDate: Wed, 3 Apr 2019 18:41:31 +0200
Committer: Borislav Petkov
CommitDate: Tue, 9 Apr 2019 19:27:42 +0200
x86/fpu
Commit-ID: 39ea9baffda91df8bfee9b45610242a3191ea1ec
Gitweb: https://git.kernel.org/tip/39ea9baffda91df8bfee9b45610242a3191ea1ec
Author: Sebastian Andrzej Siewior
AuthorDate: Wed, 3 Apr 2019 18:41:30 +0200
Committer: Borislav Petkov
CommitDate: Tue, 9 Apr 2019 19:27:29 +0200
x86/fpu
Dear RT folks!
I'm pleased to announce the v5.0.7-rt5 patch set.
Changes since v5.0.7-rt4:
- Update "x86: load FPU registers on return to userland" from v7 to
v9.
- Update "clocksource: improve Atmel TCB timer driver" from v7 to
latest post by Alexandre Belloni. I hope this works,
On 2019-04-12 19:17:59 [+0200], Borislav Petkov wrote:
> > @@ -327,7 +327,19 @@ static int __fpu__restore_sig(void __user *buf, void
> > __user *buf_fx, int size)
> > if (ret)
> > goto err_out;
> > envp =
> > + } else {
>
> I've added here:
I would
On 2019-04-12 18:48:28 [+0200], Borislav Petkov wrote:
> On Fri, Apr 12, 2019 at 06:37:41PM +0200, Sebastian Andrzej Siewior wrote:
> > (as you mentioned) so we would always record both trace points.
> > Therefore I would suggest to remove it.
>
> Remove which one?
remove x
On 2019-04-12 18:22:13 [+0200], Borislav Petkov wrote:
> On Fri, Apr 12, 2019 at 05:24:37PM +0200, Sebastian Andrzej Siewior wrote:
> > Isn't it called from fpu__clear()?
>
> $ git grep trace_x86_fpu_activate_state
> $
>
> all 23 patches applied. Grepping the la
On 2019-04-12 16:36:15 [+0200], Borislav Petkov wrote:
> On Wed, Apr 03, 2019 at 06:41:52PM +0200, Sebastian Andrzej Siewior wrote:
> > @@ -226,10 +236,9 @@ static void fpu__initialize(struct fpu *fpu)
> > {
> > WARN_ON_FPU(fpu != &current->thread.fpu);
> >
> > +
On 2019-04-10 18:52:41 [+0200], Borislav Petkov wrote:
> On Wed, Apr 10, 2019 at 06:36:15PM +0200, Borislav Petkov wrote:
> > Well, this is going in the wrong direction. The proper thing to do would
> > be to have:
> >
> > rdpkru()
> > wrpkru()
> >
> > which only do the inline asm with the
The bpf_prog_active counter is used to avoid recursion on the same CPU.
On RT we can't keep it with the preempt-disable part because the syscall
may need to acquire locks or allocate memory.
Use a locallock() to avoid recursion on the same CPU.
Signed-off-by: Sebastian Andrzej Siewior
On 2019-04-08 12:45:05 [-0700], Tejun Heo wrote:
> Hello,
Hi,
…
> This looks good from wq side. Peter, are you okay with routing this
> through the wq tree? If you wanna take it through the sched tree,
> please feel free to add
Thank you.
> Acked-by: Tejun Heo
>
> Thanks.
Sebastian
On 2019-04-08 11:14:28 [-0700], Dave Hansen wrote:
> On 4/3/19 9:41 AM, Sebastian Andrzej Siewior wrote:
> > During the context switch the xstate is loaded which also includes the
> > PKRU value.
> > If xstate is restored on return to userland it is required that the
>
On 2019-04-08 19:05:56 [+0200], Thomas Gleixner wrote:
> > diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
> > index a5b086ec426a5..f20e1d1fffa29 100644
> > --- a/arch/x86/kernel/fpu/signal.c
> > +++ b/arch/x86/kernel/fpu/signal.c
> > @@ -242,10 +242,10 @@
On 2019-04-05 16:17:50 [+0100], Julien Grall wrote:
> Hi,
Hi,
> > A per-CPU lock? It has to be a raw_spinlock_t because a normal
> > spin_lock() / local_lock() would allow scheduling and might be taken as
> > part of the context switch or soon after.
> raw_spinlock_t would not work here without
On 2019-03-22 18:59:23 [+0100], To Tejun Heo wrote:
> On 2019-03-22 10:43:34 [-0700], Tejun Heo wrote:
> > Hello,
Hi,
> > We can switch but it doesn't really say why we'd want to. Can you
> > please explain why this is better?
>
> there is this undocumented part. Avoiding the sched RCU means
On 2019-04-05 10:02:45 [+0100], Julien Grall wrote:
> RT folks already saw this corruption because local_bh_disable() does not
> preempt on RT. They are carrying a patch (see "arm64: fpsimd: use
> preemp_disable in addition to local_bh_disable()") to disable preemption
> along with
age. Should I resend the
patch with spin_lock_irqsave() instead?
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/char/random.c | 20 ++--
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38c6d1af6d
On 2019-04-04 07:26:37 [-0700], Andy Lutomirski wrote:
> I think that David was asking whether we could make kernel_fpu_begin()
> regions sometimes be preemptible. The answer is presumably yes, but I
> think that should be a separate effort, and it should be justified
> with improved performance
On 2019-04-04 14:01:43 [+], David Laight wrote:
> From: Sebastian Andrzej Siewior
> > Sent: 03 April 2019 17:41
> ...
> > To access the FPU registers in kernel we need:
> > - disable preemption to avoid that the scheduler switches tasks. By
> > doing so
Signed-off-by: Sebastian Andrzej Siewior
---
Documentation/scheduler/sched-rt-group.txt | 2 +-
include/linux/sched/prio.h | 2 +-
kernel/sched/cpupri.h | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/Documentation/scheduler/sched-rt
x86/fpu: Always store the registers in copy_fpstate_to_sigframe()
x86/fpu: Prepare copy_fpstate_to_sigframe() for TIF_NEED_FPU_LOAD
x86/fpu: Defer FPU state load until return to userspace
Sebastian Andrzej Siewior (22):
x86/fpu: Remove fpu->initialized usage in __fpu__restore_s
_fpu_begin() could also force to save FPU's registers after
fpu__initialize() without changing the outcome here.
Remove the preempt_disable() section in fpu__clear(), preemption here
does not hurt.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
arch/x86/kernel/fpu/core.c
fpu__clear().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 4
1 file changed, 4 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index de83d0ed9e14e..2f044021fde2b 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/ker
There are no users of fpu__restore() so it is time to remove it.
The comment regarding fpu__restore() and TS bit is stale since commit
b3b0870ef3ffe ("i387: do not preload FPU state at task switch time")
and has had no meaning since then.
Signed-off-by: Sebastian Andrzej Siewior
---
Doc
ered okay,
load it. Should something go wrong, return with an error and without
altering the original FPU registers.
The removal of "fpu__initialize()" is a nop because fpu->initialized is
already set for the user task.
Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Borislav Petkov
-
ime an opcode is emulated. It makes the removal of
->initialized easier if the struct is also initialized in the FPU-less
case at the same time.
Move fpu__initialize() before the FPU check so it is also performed in
the FPU-less case.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav
pdate the comment to reflect that the "state is always live".
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 35 ---
1 file changed, 8 insertions(+), 27 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/k
_begin() block could set
fpu_fpregs_owner_ctx to NULL but a kernel thread does not use
user_fpu_begin().
This is a leftover from the lazy-FPU time.
Remove user_fpu_begin(), it does not change fpu_fpregs_owner_ctx's
content.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
ar
lid before switch_fpu_finish() is invoked so the new task's ->mm is seen
instead of the old one's.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/ia32/ia32_signal.c | 17 +++
arch/x86/include/asm/fpu/internal.h | 18
arch/x86/include/asm/fpu/types.h| 9
arch/x86/incl
() suffix. __write_pkru() will just invoke
__write_pkru_insn() but in a followup patch will also read back the value.
Suggested-by: Dave Hansen
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/pgtable.h | 2 +-
arch/x86/include/asm/special_insns.h | 12 +---
arch/x86/kvm
Most users of __raw_xsave_addr() use a feature number, shift it to a
mask and then __raw_xsave_addr() shifts it back to the feature number.
Make __raw_xsave_addr() use the feature number as an argument.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
arch/x86/kernel
Dave Hansen says that the `wrpkru' is more expensive than `rdpkru'. It
has a higher cycle cost and it's also practically a (light) speculation
barrier.
As an optimisation read the current PKRU value and only write the new
one if it is different.
Signed-off-by: Sebastian Andrzej Siewior
an earlier version of the patchset while
there still was lazy-FPU on x86.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch
-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 6 ++
arch/x86/include/asm/thread_info.h | 2 ++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/fpu/internal.h
b/arch/x86/include/asm/fpu/internal.h
index 82ff84a4c4ab7..b12874b7cf0cf 100644
--- a/arch/x86
During the context switch the xstate is loaded which also includes the
PKRU value.
If xstate is restored on return to userland it is required that the
PKRU value in xstate is the same as the one in the CPU.
Save the PKRU in xstate during modification.
Signed-off-by: Sebastian Andrzej Siewior
From: Rik van Riel
The FPU registers need only to be saved if TIF_NEED_FPU_LOAD is not set.
Otherwise this has already been done and can be skipped.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 12 +++-
1 file changed, 11
Start refactoring __fpu__restore_sig() by inlining
copy_user_to_fpregs_zeroing(). The original function remains and will be
used to restore from userland memory if possible.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 20 +++-
1 file changed, 19
Andrzej Siewior
---
arch/x86/mm/pkeys.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 05bb9a44eb1c3..50f65fc1b9a3f 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -142,13 +142,6 @@ u32 init_pkru_value = PKRU_AD_KEY( 1
s and keep the !ia32_fxstate version. Copy only
the user_i387_ia32_struct data structure in the ia32_fxstate.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 146 ++-
1 file changed, 57 insertions(+), 89 deletions(-)
diff --git a/arch/x86/
gned-off-by: Sebastian Andrzej Siewior
---
arch/x86/entry/common.c | 8 +++
arch/x86/include/asm/fpu/api.h | 22 +-
arch/x86/include/asm/fpu/internal.h | 27 ---
arch/x86/include/asm/trace/fpu.h| 5 +-
arch/x86/kernel/fpu/core.c |
which can handle faults.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index f20e1d1fffa29
rror value and the caller handles it.
copy_user_to_fpregs_zeroing() and its helpers remain and will be used
later for a fastpath optimisation.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 43
arch/x86/kernel/fpu/signal.c
mpt to save them directly. The direct save may fail, but that should only
happen on the first invocation or after fork() while the
page is RO.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --
ature_mask consistently.
This results in changes to the kvm code as:
feature -> xfeature_mask
index -> xfeature_nr
Suggested-by: Dave Hansen
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/xstate.h | 4 ++--
arch/x86/kernel/fpu/xstate.c | 22 ++--
the slowpath which can handle
pagefaults.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index a5b086ec426a5..f20e1d1fffa29 100644
`init_fpstate'
and initialize the PKRU value to 0.
Add the `init_pkru_value' to `init_fpstate' so it is set to the init
value in such a case.
In theory we could drop copy_init_pkru_to_fpregs() because restoring the
PKRU at return-to-userland should be enough.
Signed-off-by: Sebastian Andrzej Siewior
it the kernel thread would end up with a
random value which it inherited from the previous user task.
Signed-off-by: Rik van Riel
[bigeasy: save pkru to xstate, no cache, don't use __raw_xsave_addr()]
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm
From: Rik van Riel
Add helper function that ensures the floating point registers for
the current task are active. Use with preemption disabled.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/api.h | 11 +++
arch/x86/include/asm
On 2019-03-31 20:20:25 [+0200], Thomas Gleixner wrote:
>
> I think this should do the following:
>
> fpregs_lock();
> if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
> pagefault_disable();
> ret = copy_fpu_to_user(...);
> pagefault_enable();
>
Dear RT folks!
I'm pleased to announce the v5.0.5-rt3 patch set.
Changes since v5.0.5-rt2:
- Multiple fixes for build failures introduced by the printk series.
Reported by the kbuild test robot, patched by John Ogness.
- Various powerpc fixes:
- Reorder TIF bits so the assembly
On 2019-03-26 10:34:21 [+0100], Juri Lelli wrote:
> Hi,
Hi,
…
> # for I in `seq 10`; do fsfreeze -f ./testmount; sleep 1; fsfreeze -u
> ./testmount; done
>
> [ cut here ]
> DEBUG_LOCKS_WARN_ON(rt_mutex_owner(lock) != current)
> WARNING: CPU: 10 PID: 1226 at
This is invoked from the secondary CPU in atomic context. On x86 we use
tsc instead. On Power we XOR it against mftb(), so let's use the stack
address as the initial value.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/powerpc/include/asm/stackprotector.h | 4
1 file changed, 4 insertions
From: John Ogness
This commit addresses several build failures which were
reported by the kbuild test robot.
The fixes were folded into the original commits.
Reported-by: kbuild test robot
Signed-off-by: John Ogness
Signed-off-by: Sebastian Andrzej Siewior
---
arch/powerpc/kernel
immediates.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/powerpc/include/asm/thread_info.h | 13 -
arch/powerpc/kernel/entry_32.S | 12 +++-
arch/powerpc/kernel/entry_64.S | 12 +++-
3 files changed, 22 insertions(+), 15 deletions(-)
diff --git
The locallock protects the per-CPU variable tce_page. The function
attempts to allocate memory while tce_page is protected (by disabling
interrupts).
Use local_irq_save() instead of local_irq_disable().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/powerpc/platforms/pseries/iommu.c | 16
On 2019-03-22 10:43:34 [-0700], Tejun Heo wrote:
> Hello,
Hi,
> We can switch but it doesn't really say why we'd want to. Can you
> please explain why this is better?
there is this undocumented part. Avoiding sched RCU also means we
are more preemptible, which is good :) Especially on -RT
On 2019-03-21 21:26:29 [+0100], To linux-kernel@vger.kernel.org wrote:
> diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
> index 5a467a381245c..052a16c96218f 100644
> --- a/arch/x86/kernel/fpu/signal.c
> +++ b/arch/x86/kernel/fpu/signal.c
> @@ -297,28 +298,63 @@ static int
On 2019-03-21 22:21:21 [-0500], Scott Wood wrote:
> rcu_bh is disabled on PREEMPT_RT via a stub ops that has no name. Thus,
> if a torture_type other than "rcu" is used, rcu_torture_init() will
> pass NULL to strcmp() when iterating over torture_ops[], and oops.
>
> Signed-off-by: Scott Wood
>
Thomas Gleixner
> [bigeasy: mangle changelog a little]
> Signed-off-by: Sebastian Andrzej Siewior
A gentle ping.
Sebastian
Start refactoring __fpu__restore_sig() by inlining
copy_user_to_fpregs_zeroing().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 42
1 file changed, 19 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b
rror value and the caller handles it.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 33 ++-
arch/x86/kernel/fpu/signal.c| 62 +++--
2 files changed, 72 insertions(+), 23 deletions(-)
diff --git a/arch/x86/include/asm/fpu/int
an earlier version of the patchset while
there still was lazy-FPU on x86.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 43 -
arch/x86/kernel/fpu/signal.c| 34 +--
2 files changed
From: Rik van Riel
The FPU registers need only to be saved if TIF_NEED_FPU_LOAD is not set.
Otherwise this has already been done and can be skipped.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 11 ++-
1 file changed, 10
x86/fpu: Always store the registers in copy_fpstate_to_sigframe()
x86/fpu: Prepare copy_fpstate_to_sigframe() for TIF_NEED_FPU_LOAD
x86/fpu: Defer FPU state load until return to userspace
Sebastian Andrzej Siewior (19):
x86/fpu: Remove fpu->initialized usage in __fpu__restore_s
On 2019-03-12 17:06:18 [-0700], Dave Hansen wrote:
> Thanks for doing this. I'll see if I can dig into the test case and
> figure out why it's not doing what I expected. It might be that the CPU
> _could_ go back to the init state but chooses not to, or that something
> else is taking us out of
On 2019-03-20 16:46:01 [-0700], Paul E. McKenney wrote:
> Thank you! I reverted v2 and applied this one with the same sort of
> update. Testing is going well thus far aside from my failing to add
> the required "=0" after the rcutree.use_softirq. I will probably not
> be the only one who will
.
Reported-by: Thomas Gleixner
Tested-by: Mike Galbraith
Signed-off-by: Sebastian Andrzej Siewior
---
v2…v3: - Ensure that with RCU_BOOST=y the callback is invoked in thread
context. Pointed out by Joel Fernandes.
- Swap the init logic so it initializes the rcuc thread
On 2019-03-20 11:12:10 [-0700], Paul E. McKenney wrote:
> We could name it something like "use_softirq" and initialize it to true.
> I am OK either way.
I had to add one hunk to get it compiled. It worked then. Let me swap
the logic as you suggested and then I repost the whole thing. This will
On 2019-03-20 10:30:01 [-0700], Paul E. McKenney wrote:
> On Wed, Mar 20, 2019 at 05:35:32PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2019-03-20 09:15:00 [-0700], Paul E. McKenney wrote:
> > > I am considering making it a module_param() to avoid namespace pollution,
> &
Dear RT folks!
I'm pleased to announce the v5.0.3-rt1 patch set.
Changes since v4.19.25-rt16:
- rebase to v5.0
- Several ARM architectures have a so-called "boot_lock" in their SMP
bring up code. In previous releases the boot_lock was converted
to a raw_spinlock in order to get
On 2019-03-20 09:15:00 [-0700], Paul E. McKenney wrote:
> I am considering making it a module_param() to avoid namespace pollution,
> as it would become something like rcutree.nosoftirq.
>
> Thoughts?
nope, perfect.
> Thanx, Paul
Sebastian
On 2019-03-20 08:44:40 [-0700], Paul E. McKenney wrote:
>
> And it does seem to work better. I will give it more intense testing
> later on, but in the meantime I have merged this change into your
> earlier patch.
thanks.
> We will see whether or not I am able summon up the courage to push it
On 2019-03-14 08:09:51 [+0100], Juri Lelli wrote:
> Hi,
>
> On 07/03/19 13:09, Juri Lelli wrote:
> > Clocksource watchdog has been found responsible for generating latency
> > spikes (in the 10-20 us range) when woken up to check for TSC stability.
> >
> > Add an option to disable it at boot.
>
On 2019-03-19 12:44:19 [+0100], To Paul E. McKenney wrote:
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 0f31b79eb6761..0a719f726e149 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
…
> +/*
> + * Spawn per-CPU RCU core processing kthreads.
> + */
> +static int __init
On 2019-03-19 20:26:13 [-0400], Joel Fernandes wrote:
> > @@ -2769,19 +2782,121 @@ static void invoke_rcu_callbacks(struct rcu_data
> > *rdp)
> > {
> > if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
> > return;
> > - if (likely(!rcu_state.boost)) {
> > -
On 2019-03-19 09:50:07 [-0700], Paul E. McKenney wrote:
> Besides, it looks very weird for me to have two Signed-off-by lines. ;-)
See commit 602cae04c4864 ("perf/x86/intel: Delay memory deallocation
until x86_pmu_dead_cpu()")
> In theory, the trace_rcu_utilization() should be added, just like
On 2019-03-19 08:59:23 [-0700], Paul E. McKenney wrote:
> I doubt that there is any code left from my original, so I set you as
> author.
I always forward-ported the patch over the years. So if it is no
longer what it once was, so be it.
> I queued this and am starting tests without setting
at
the RCU-boosting priority.
Reported-by: Thomas Gleixner
Tested-by: Mike Galbraith
Signed-off-by: Paul E. McKenney
[bigeasy: add rcunosoftirq option]
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2:
- rebased to Paul's rcu/dev tree/branch
- Replaced Mike's email with @gm