On 2019-03-15 09:35:44 [-0400], Steven Rostedt wrote:
> On Fri, 15 Mar 2019 12:11:30 +0100
> Sebastian Andrzej Siewior wrote:
>
> > +static void rcu_cpu_kthread_park(unsigned int cpu)
> > +{
>
> Should we add one of the trace_rcu_.. trace events here?
If it is requ
at
the RCU-boosting priority.
Reported-by: Thomas Gleixner
Tested-by: Mike Galbraith
Signed-off-by: Paul E. McKenney
[bigeasy: add rcunosoftirq option]
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/rcu/tree.c| 132 ---
kernel/rcu/tree.h
.git.bris...@redhat.com
Signed-off-by: Thomas Gleixner
[bigeasy: preempt_disable() around wq_worker_sleeping() by Daniel Bristot de
Oliveira]
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/sched/core.c | 88 +
kernel/workqueue.c | 54
rcu_read_lock_sched with rcu_read_lock and acquire the RCU lock
where it is not yet explicitly acquired. Replace local_irq_disable() with
rcu_read_lock(). Update asserts.
Signed-off-by: Thomas Gleixner
[bigeasy: mangle changelog a little]
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/workqueue.c | 93
The second patch was originally posted around v3.0-rc. While digging
through the archive it seems that there is nothing wrong with the patch
except for the wording of its description. I reworded that part.
I'm not sure if the first patch ever made it to lkml.
Both were in -RT for ages (and
On 2018-10-17 19:06:33 [+0200], Paolo Bonzini wrote:
> On 17/10/2018 19:05, Sebastian Andrzej Siewior wrote:
> > The function irqfd_wakeup() has flags defined as __poll_t and then it
> > has additional flags which are used for irqflags.
> >
> > Redefine the inner
On 2019-03-11 12:06:05 [+0100], To Dave Hansen wrote:
> On 2019-03-08 11:01:25 [-0800], Dave Hansen wrote:
> > On 3/8/19 10:08 AM, Sebastian Andrzej Siewior wrote:
> > > On 2019-02-25 10:16:24 [-0800], Dave Hansen wrote:
> > >>> + if (!cpu_
On 2019-03-08 11:01:25 [-0800], Dave Hansen wrote:
> On 3/8/19 10:08 AM, Sebastian Andrzej Siewior wrote:
> > On 2019-02-25 10:16:24 [-0800], Dave Hansen wrote:
> >>> + if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
> >>> + return;
> >>> +
On 2019-03-11 11:46:00 [+0900], Sergey Senozhatsky wrote:
> On (03/08/19 15:02), Sebastian Andrzej Siewior wrote:
> > On 2019-02-12 15:30:03 [+0100], John Ogness wrote:
> >
> > you removed the whole `irq_work' thing. You can also remove the include
> > for linux/irq
On 2019-02-26 17:38:22 [+0100], Oleg Nesterov wrote:
> Hi Sebastian,
Hi Oleg,
> Sorry, I just noticed your email...
no worries.
> > So I assumed that while SIGUSR1 is handled SIGUSR2 will wait until the
> > current signal is handled. So no interruption. But then SIGSEGV is
> > probably the
On 2019-02-25 10:16:24 [-0800], Dave Hansen wrote:
> On 2/21/19 3:50 AM, Sebastian Andrzej Siewior wrote:
> > diff --git a/arch/x86/include/asm/fpu/internal.h
> > b/arch/x86/include/asm/fpu/internal.h
> > index 67e4805bccb6f..05f6fce62e9f1 100644
> > --- a/arch/x86
On 2019-02-25 10:08:10 [-0800], Dave Hansen wrote:
> On 2/21/19 3:50 AM, Sebastian Andrzej Siewior wrote:
> > @@ -111,6 +111,12 @@ static inline void __write_pkru(u32 pkru)
> > {
> > u32 ecx = 0, edx = 0;
> >
> > + /*
> > +* WRPKRU is
On 2019-03-07 18:14:46 [+], Julien Grall wrote:
> Hi Sebastian,
Hi,
> This description looks better. I will update the commit message. Do you mind
> if I had your signed-off-by as you provided the commit message?
Sure. However you might also want to "just" add something like
On 2019-02-12 15:30:03 [+0100], John Ogness wrote:
you removed the whole `irq_work' thing. You can also remove the include
for linux/irq_work.h.
Sebastian
On 2019-03-08 00:07:41 [+], Liu, Yongxin wrote:
> The lane is a critical resource which needs to be protected. One CPU
> can use only one lane. If the number of CPUs is greater than the total
> number of lanes, lanes can be shared among CPUs.
>
> In non-RT kernel, get_cpu() disable preemption
On 2019-03-06 17:57:09 [+0800], Yongxin Liu wrote:
> In this change, we replace get_cpu/put_cpu with local_lock_cpu/
> local_unlock_cpu, and introduce per CPU variable "ndl_local_lock".
> Due to preemption on RT, this lock can avoid race condition for the
> same lane on the same CPU. When CPU
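The get_cpu()/local_lock_cpu() lane scheme discussed above can be sketched in plain C. All names here (`acquire_lane`, `lane_lock_held`, the CPU and lane counts) are illustrative stand-ins for the driver's per-CPU "ndl_local_lock", not the real libnvdimm code; a plain flag models the sleepable per-CPU lock:

```c
#include <assert.h>

#define NR_CPUS  2
#define NR_LANES 1   /* fewer lanes than CPUs: lanes must be shared */

/* one "local lock" per CPU; a flag stands in for the sleepable RT
 * local lock (unlike get_cpu(), which disables preemption) */
static int lane_lock_held[NR_CPUS];

static int acquire_lane(int cpu)
{
    assert(!lane_lock_held[cpu]);   /* no recursion on the per-CPU lock */
    lane_lock_held[cpu] = 1;
    return cpu % NR_LANES;          /* lane is shared when CPUs > lanes */
}

static void release_lane(int cpu)
{
    lane_lock_held[cpu] = 0;
}
```

With one lane and two CPUs, both CPUs map to lane 0, but each serialises on its own per-CPU lock before touching it.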
can now sleep and therefore cannot be used from
> interrupt context. Use a raw_spin_lock instead to prevent the lock from
> sleeping.
>
> Signed-off-by: Julien Grall
Now that I had time to look at it, for the change itself:
Acked-by: Sebastian Andrzej Siewior
For the d
On 2019-03-04 17:21:57 [+], Julien Grall wrote:
> (CC correctly linux-rt-users)
>
> On 04/03/2019 17:20, Julien Grall wrote:
> > At the moment show_lock is implemented using spin_lock_t and called from
> > an interrupt context on Arm64. The following backtrace was triggered by:
> >
> > 42sh#
On 2019-02-18 15:07:51 [+], Julien Grall wrote:
> Hi,
Hi,
> > Wouldn't this arbitrarily increase softirq latency? Unconditionally
> > forbidding SIMD in softirq might make more sense. It depends on how
> > important the use cases are...
It would increase the softirq latency but the
On 2019-02-28 18:12:25 [+0100], Frederic Weisbecker wrote:
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -474,17 +474,62 @@ enum
…
> +static inline unsigned int local_softirq_pending(void)
> +{
> + return local_softirq_data() & SOFTIRQ_PENDING_MASK;
> +}
…
I'm still
Commit-ID: ad01423aedaa7c6dd62d560b73a3cb39e6da3901
Gitweb: https://git.kernel.org/tip/ad01423aedaa7c6dd62d560b73a3cb39e6da3901
Author: Sebastian Andrzej Siewior
AuthorDate: Tue, 12 Feb 2019 17:25:54 +0100
Committer: Thomas Gleixner
CommitDate: Thu, 28 Feb 2019 11:18:38 +0100
kthread
On 2019-02-12 18:14:15 [+0100], Frederic Weisbecker wrote:
> diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
> index 240419382978..ef9e4c752f56 100644
> --- a/include/linux/bottom_half.h
> +++ b/include/linux/bottom_half.h
> @@ -28,17 +28,7 @@ enum
>
> #define
On 2019-02-12 18:14:14 [+0100], Frederic Weisbecker wrote:
> __local_bh_disable_ip() is neither for strict internal use nor does it
> require the caller to disable hardirqs. Probaby a celebration for ancient
Probaby
> behaviour.
I think the point was to override the IP for the tracer. So
On 2019-02-12 18:14:02 [+0100], Frederic Weisbecker wrote:
> --- /dev/null
> +++ b/include/linux/softirq_vector.h
> @@ -0,0 +1,10 @@
could you please add a spdx header/identifier here?
> +SOFTIRQ_VECTOR(HI)
> +SOFTIRQ_VECTOR(TIMER)
> +SOFTIRQ_VECTOR(NET_TX)
> +SOFTIRQ_VECTOR(NET_RX)
>
Dear RT folks!
I'm pleased to announce the v4.19.25-rt16 patch set.
Changes since v4.19.25-rt15:
- The "preserve task state" change in cpu_chill() in the previous
release is responsible for missing a wake up. Reported by Mike
Galbraith.
- The x86-32 lazy preempt code was broken.
FE flag because it is not required.
ping
> Cc: Jani Nikula
> Cc: Joonas Lahtinen
> Cc: Rodrigo Vivi
> Cc: David Airlie
> Cc: Daniel Vetter
> Cc: intel-...@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Sebastian Andrzej Siewior
> ---
&
Galbraith
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/time/hrtimer.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 6f2736ec4b8ef..e1040b80362c9 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/
On 2019-02-25 15:43:35 [+0100], Mike Galbraith wrote:
> Hi Sebastian,
Hi Mike,
> My box claims that this patch is busted. It argues its case by IO
> deadlocking any kernel this patch is applied to when spinning rust is
> flogged, including virgin 4.19-rt14, said kernel becoming stable again
>
Dear RT folks!
I'm pleased to announce the v4.19.23-rt14 patch set.
Changes since v4.19.23-rt13:
- Use the specified preempt mask in should_resched() on x86. Otherwise
a scheduling opportunity of non-RT tasks could be missed.
- Preserve the task state in cpu_chill()
- Add two more
ered okay,
load it. Should something go wrong, return with an error and without
altering the original FPU registers.
The removal of "fpu__initialize()" is a nop because fpu->initialized is
already set for the user task.
Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Borislav Petkov
-
There are no users of fpu__restore() so it is time to remove it.
The comment regarding fpu__restore() and TS bit is stale since commit
b3b0870ef3ffe ("i387: do not preload FPU state at task switch time")
and has no meaning since.
Signed-off-by: Sebastian Andrzej Siewior
---
Doc
ime an opcode is emulated. It makes the removal of
->initialized easier if the struct is also initialized in the FPU-less
case at the same time.
Move fpu__initialize() before the FPU check so it is also performed in
the FPU-less case.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav
fpu__clear().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 4
1 file changed, 4 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index de83d0ed9e14e..2f044021fde2b 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/ker
Most users of __raw_xsave_addr() use a feature number, shift it to a
mask and then __raw_xsave_addr() shifts it back to the feature number.
Make __raw_xsave_addr() use the feature number as an argument.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
arch/x86/kernel
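The round trip this patch removes can be illustrated with a small sketch. The offset table and helper names below are hypothetical (real offsets come from the CPUID xstate enumeration); only the mask-to-number conversion mirrors the described change:

```c
#include <assert.h>
#include <stdint.h>

/* hypothetical per-feature offsets into the xsave area */
static const int xstate_offsets[8] = { 0, 64, 128, 256, 320, 384, 448, 512 };

/* old style: the caller turned a feature number into a mask and the
 * helper immediately turned it back into a number */
static int raw_xsave_addr_from_mask(uint64_t xfeature_mask)
{
    int xfeature_nr = __builtin_ctzll(xfeature_mask); /* mask -> number */
    return xstate_offsets[xfeature_nr];
}

/* new style: pass the feature number directly, no round trip */
static int raw_xsave_addr_from_nr(int xfeature_nr)
{
    return xstate_offsets[xfeature_nr];
}
```

Both variants compute the same offset; the second one just drops the pointless shift-and-count-back step.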
From: Rik van Riel
Add helper function that ensures the floating point registers for
the current task are active. Use with preemption disabled.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/api.h | 11 +++
arch/x86/include/asm
Dave Hansen says that the `wrpkru' is more expensive than `rdpkru'. It
has a higher cycle cost and it's also practically a (light) speculation
barrier.
As an optimisation read the current PKRU value and only write the new
one if it is different.
Signed-off-by: Sebastian Andrzej Siewior
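A minimal userspace model of the described optimisation, with the rdpkru/wrpkru instructions replaced by stand-in functions and a write counter added to make the saving visible (the counter and shadow variable are illustrative, not part of the actual patch):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t pkru_shadow;     /* stands in for the CPU's PKRU register */
static int wrpkru_count;         /* counts the expensive writes */

static void wrpkru(uint32_t val) { pkru_shadow = val; wrpkru_count++; }
static uint32_t rdpkru(void)     { return pkru_shadow; }

/* only issue the costly wrpkru when the value actually changes; the
 * cheap rdpkru pays for itself by skipping redundant writes */
static void __write_pkru(uint32_t pkru)
{
    if (pkru == rdpkru())
        return;
    wrpkru(pkru);
}
```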
_begin() block could set
fpu_fpregs_owner_ctx to NULL but a kernel thread does not use
user_fpu_begin().
This is a leftover from the lazy-FPU time.
Remove user_fpu_begin(), it does not change fpu_fpregs_owner_ctx's
content.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
ar
s and keep the !ia32_fxstate version. Copy only
the user_i387_ia32_struct data structure in the ia32_fxstate.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 146 ++-
1 file changed, 57 insertions(+), 89 deletions(-)
diff --git a/arch/x86/
Andrzej Siewior
---
arch/x86/mm/pkeys.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 05bb9a44eb1c3..50f65fc1b9a3f 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -142,13 +142,6 @@ u32 init_pkru_value = PKRU_AD_KEY( 1
an earlier version of the patchset while
there still was lazy-FPU on x86.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 45 -
arch/x86/kernel/fpu/signal.c| 34 +-
2 files changed, 13
-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 6 ++
arch/x86/include/asm/thread_info.h | 2 ++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/fpu/internal.h
b/arch/x86/include/asm/fpu/internal.h
index 05f6fce62e9f1..9a026d11b4f97 100644
--- a/arch/x86
to userspace
Sebastian Andrzej Siewior (17):
x86/fpu: Remove fpu->initialized usage in __fpu__restore_sig()
x86/fpu: Remove fpu__restore()
x86/fpu: Remove preempt_disable() in fpu__clear()
x86/fpu: Always init the `state' in fpu__clear()
x86/fpu: Remove fpu-&g
_fpu_begin() could also force to save FPU's registers after
fpu__initialize() without changing the outcome here.
Remove the preempt_disable() section in fpu__clear(), preemption here
does not hurt.
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Borislav Petkov
---
arch/x86/kernel/fpu/core.c
Before this commit the kernel thread would end up
with a random value which it inherited from the previous user task.
Signed-off-by: Rik van Riel
[bigeasy: save pkru to xstate, no cache, don't use __raw_xsave_addr()]
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm
ature_mask consistently.
This results in changes to the kvm code as:
feature -> xfeature_mask
index -> xfeature_nr
Suggested-by: Dave Hansen
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/xstate.h | 4 ++--
arch/x86/kernel/fpu/xstate.c | 22 ++--
rror value and the caller handles it.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 35 +++-
arch/x86/kernel/fpu/signal.c| 62 +++--
2 files changed, 73 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/fpu/int
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/entry/common.c | 8 +++
arch/x86/include/asm/fpu/api.h | 22 +-
arch/x86/include/asm/fpu/internal.h | 27 ---
arch/x86/include/asm/trace/fpu.h| 5 +-
arch/x86/kernel/fpu/core.c |
Start refactoring __fpu__restore_sig() by inlining
copy_user_to_fpregs_zeroing().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 42
1 file changed, 19 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b
From: Rik van Riel
The FPU registers need only to be saved if TIF_NEED_FPU_LOAD is not set.
Otherwise this has been already done and can be skipped.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 11 ++-
1 file changed, 10
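The skip logic can be modelled in a few lines. `thread_flags`, the bit position, and `save_fpregs_if_needed()` are illustrative names; only the TIF_NEED_FPU_LOAD test mirrors the patch:

```c
#include <assert.h>

#define TIF_NEED_FPU_LOAD (1u << 0)  /* illustrative bit position */

static unsigned int thread_flags;
static int fpregs_saved;             /* counts simulated register saves */

static void copy_fpregs_to_fpstate(void) { fpregs_saved++; }

/* with TIF_NEED_FPU_LOAD set, the registers were already saved into
 * the task's fpstate, so the copy can be skipped */
static void save_fpregs_if_needed(void)
{
    if (!(thread_flags & TIF_NEED_FPU_LOAD))
        copy_fpregs_to_fpstate();
}
```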
During the context switch the xstate is loaded which also includes the
PKRU value.
If xstate is restored on return to userland it is required that the
PKRU value in xstate is the same as the one in the CPU.
Save the PKRU in xstate during modification.
Signed-off-by: Sebastian Andrzej Siewior
Update the comment to reflect that the "state is always live".
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 35 ---
1 file changed, 8 insertions(+), 27 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/k
lid before switch_fpu_finish() is invoked so the ->mm of the new task is
seen instead of the old one.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/ia32/ia32_signal.c | 17 +++
arch/x86/include/asm/fpu/internal.h | 18
arch/x86/include/asm/fpu/types.h| 9
arch/x86/incl
On 2019-02-20 08:47:51 [+0100], Juri Lelli wrote:
> > In this case you prepare the wakeup and then wake the CPU anyway. There
> > should be no downside to this unless the housekeeping CPU is busy and in
> > irq-off regions which would increase the latency. Also in case of
> > cyclictest -d0
>
On 2019-02-14 14:37:14 [+0100], Juri Lelli wrote:
> Hi,
Hi,
> Now, I'm sending this and an RFC, as I'm wondering if the first behavior
> is actually what we want, and it is not odd at all for reasons that are
> not evident to me at the moment. In this case this posting might also
> function as a
On 2019-02-19 17:27:41 [+0100], Juri Lelli wrote:
> It is better. Warning message doesn't appear anymore.
Okay, thanks.
Sebastian
ave the task state on entry and restore it on return. Simply set the
state in order to avoid updating ->task_state_change.
Cc: stable...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/time/hrtimer.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/
read lock
is held. Use the same mechanism for the softirq-pending check.
Cc: stable...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/softirq.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 48ae7dae81b9c..25bc
On 2019-02-19 15:58:26 [+0100], Juri Lelli wrote:
> Hi,
Hi,
> I've been seeing those messages while running some stress tests (hog
> tasks pinned to CPUs).
>
> Have yet to see them after I applied this patch earlier this morning (it
> usually took not much time to reproduce).
So is it better or
The comment is obsolete since commit
5da70160462e8 ("hrtimer: Implement support for softirq based hrtimers")
because it is possible to let a specific hrtimer expire in softirq
context.
Remove the obsolete comment.
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux/inter
should_resched() should check against preempt_offset after unmasking the
need-resched-bit. Otherwise should_resched() won't work for
preempt_offset != 0 and lazy-preempt set.
Cc: stable...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/preempt.h | 2 +-
1 file
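A simplified model of the fixed check, assuming the x86 encoding where the need-resched information lives in the inverted top bit of the preempt count; `lazy` stands in for the RT tree's lazy-preempt request, and the function signature is illustrative, not the actual asm/preempt.h code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* inverted top bit: a cleared bit means "resched needed" */
#define PREEMPT_NEED_RESCHED 0x80000000u

static bool should_resched_sim(uint32_t count, uint32_t preempt_offset,
                               bool lazy)
{
    if (count == preempt_offset)      /* immediate resched at this nesting */
        return true;

    /* the fix: mask the need-resched bit out and compare the remaining
     * nesting level against preempt_offset instead of against 0, so the
     * lazy path also works for preempt_offset != 0 */
    count &= ~PREEMPT_NEED_RESCHED;
    if (count != preempt_offset)
        return false;

    return lazy;                      /* lazy-preempt request pending? */
}
```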
n it won't be masked
out because we never look at ksoftirqd's mask.
If there are still pending softirqs while going to idle check
ksoftirqd's and ktimersoftd's mask before complaining about unhandled
softirqs.
Cc: stable...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/soft
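The idle-time check can be sketched as follows; the function and mask names are illustrative stand-ins, not the actual kernel/softirq.c code:

```c
#include <assert.h>
#include <stdbool.h>

/* softirqs the per-CPU threads have already taken responsibility for */
static unsigned int ksoftirqd_mask, ktimersoftd_mask;

/* only warn about pending softirqs that no softirq thread will handle */
static bool softirq_pending_unhandled(unsigned int pending)
{
    unsigned int unhandled = pending & ~(ksoftirqd_mask | ktimersoftd_mask);
    return unhandled != 0;   /* true => emit the "unhandled softirq" warning */
}
```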
On 2019-02-13 10:35:53 [+0100], Borislav Petkov wrote:
…
> > + *
> > + * If TIF_NEED_FPU_LOAD is cleared then CPU's FPU registers are holding the
> > + * current content of current()'s FPU register state.
>
> "current content of current" - that's a lot of c...
>
> Make that
>
> "... then the
On 2019-02-13 10:30:25 [+0100], Borislav Petkov wrote:
> On Thu, Feb 07, 2019 at 11:43:25AM +0100, Sebastian Andrzej Siewior wrote:
> > They are accessible inside the region. But they should not be touched by
> > context switch code (and later BH).
> > Is that what you mea
On 2019-02-13 16:40:00 [+0100], Ard Biesheuvel wrote:
> > > This is equal what x86 is currently doing. The naming is slightly
> > > different, there is irq_fpu_usable().
> >
> > Yes, I think it's basically the same idea.
> >
> > It's been evolving a bit on both sides, but is quite similar now.
> >
On 2019-02-13 15:36:30 [+], Dave Martin wrote:
> On Wed, Feb 13, 2019 at 03:30:29PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2019-02-08 16:55:13 [+], Julien Grall wrote:
> > > When the kernel is compiled with CONFIG_KERNEL_MODE_NEON, some part of
> > > th
On 2019-02-08 14:12:33 [+0100], To Borislav Petkov wrote:
> Then we have lat_sig [0]. Without the series 64bit:
> |Signal handler overhead: 2.6839 microseconds
> |Signal handler overhead: 2.6996 microseconds
> |Signal handler overhead: 2.6821 microseconds
>
> with the series:
> |Signal handler
On 2019-02-08 16:55:13 [+], Julien Grall wrote:
> When the kernel is compiled with CONFIG_KERNEL_MODE_NEON, some part of
> the kernel may be able to use FPSIMD/SVE. This is for instance the case
> for crypto code.
>
> Any use of FPSIMD/SVE in the kernel are clearly marked by using the
>
: David Airlie
Cc: Daniel Vetter
Cc: intel-...@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/gpu/drm/i915/i915_sw_fence.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c
b
ad's delay timer since all
operations occur under a lock.
Remove TIMER_IRQSAFE from the timer initialisation.
Use timer_setup(), which is the official initialisation function.
Cc: Petr Mladek
Cc: Ingo Molnar
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux/kthread.h | 5 ++-
: Sebastian Andrzej Siewior
Cc: Guenter Roeck
Reported-and-tested-by: Steffen Trumtrar
Reported-by: Tim Sander
Signed-off-by: Julia Cartwright
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux/kthread.h | 4 ++--
kernel/kthread.c| 42 -
2
On 2019-01-30 13:27:13 [+0100], Borislav Petkov wrote:
> On Wed, Jan 30, 2019 at 01:06:47PM +0100, Sebastian Andrzej Siewior wrote:
> > I don't know if hackbench would show anything besides noise.
>
> Yeah, if a sensible benchmark (dunno if hackbench is among them :))
> s
On 2019-01-30 13:53:51 [+0100], Borislav Petkov wrote:
> > I've been asked to add comment above the sequence so it is understood. I
> > think the general approach is easy to follow once the concept is
> > understood. I don't mind renaming the TIF_ thingy once again (it
> > happend once or twice
On 2019-01-30 12:43:22 [+0100], Borislav Petkov wrote:
> > @@ -171,9 +156,15 @@ int copy_fpstate_to_sigframe(void __user *buf, void
> > __user *buf_fx, int size)
> > sizeof(struct user_i387_ia32_struct), NULL,
> > (struct _fpstate_32 __user *) buf) ? -1 :
On 2019-01-30 12:55:07 [+0100], Borislav Petkov wrote:
> This definitely needs to be written somewhere in
>
> arch/x86/include/asm/fpu/internal.h
>
> or where we decide to put the FPU handling rules.
Added:
Index: staging/arch/x86/include/asm/fpu/internal.h
On 2019-01-23 10:09:24 [-0800], Dave Hansen wrote:
> On 1/9/19 3:47 AM, Sebastian Andrzej Siewior wrote:
> > +static inline void __write_pkru(u32 pkru)
> > +{
> > + /*
> > +* Writing PKRU is expensive. Only write the PKRU value if it is
> > +
On 2019-01-28 19:49:59 [+0100], Borislav Petkov wrote:
> > --- a/arch/x86/kernel/fpu/xstate.c
> > +++ b/arch/x86/kernel/fpu/xstate.c
> > @@ -830,15 +830,15 @@ static void *__raw_xsave_addr(struct xregs_state
> > *xsave, int xfeature_nr)
…
> > -void *get_xsave_addr(struct xregs_state *xsave, int
On 2019-01-28 19:23:49 [+0100], Borislav Petkov wrote:
> > diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
> > index b56d504af6545..31b66af8eb914 100644
> > --- a/arch/x86/include/asm/fpu/api.h
> > +++ b/arch/x86/include/asm/fpu/api.h
> > @@ -10,6 +10,7 @@
> >
> >
On 2019-02-06 15:01:14 [+0100], Borislav Petkov wrote:
> On Tue, Feb 05, 2019 at 07:03:37PM +0100, Sebastian Andrzej Siewior wrote:
> > Well, nothing changes in regard to the logic. Earlier we had a variable
> > which helped us to distinguish between user & kernel
On 2019-01-25 16:18:40 [+0100], Borislav Petkov wrote:
> Reviewed-by: Borislav Petkov
thanks.
> Should we do this microoptimization in addition, to save us the
> activation when the kernel thread here:
>
> taskA -> kernel thread -> taskA
>
> doesn't call kernel_fpu_begin() and thus
lid before switch_fpu_finish() is invoked so the ->mm of the new task is
seen instead of the old one.
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2:
- patch description changes.
- dropping brackets around a single statement in fpu__save().
arch/x86/ia32/ia32_signal.c | 17 +++-
arch/x8
On 2019-01-24 14:34:49 [+0100], Borislav Petkov wrote:
> > set it back to one) or don't return to userland.
> >
> > The context switch code (switch_fpu_prepare() + switch_fpu_finish())
> > can't unconditionally save/restore registers for kernel threads. I have
> > no idea what will happen if we
fpu__clear().
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2:
- rewrote the description. Replaced the "I don't know why it is like it is
makes no sense buh" part with some pointer which might explain why
copy_fxregs_to_kernel() ended there and since when it definitely is a
nop.
-
Update the comment to reflect that the "state is always live".
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2: rewrite the patch description.
arch/x86/kernel/fpu/signal.c | 30 ++
1 file changed, 6 insertions(+), 24 deletions(-)
Index
On 2019-01-22 18:00:23 [+0100], Borislav Petkov wrote:
> On Tue, Jan 22, 2019 at 05:15:51PM +0100, Oleg Nesterov wrote:
> > I don't know... tried to google, found nothing.
> >
> > the comment in /usr/include/sys/ucontext.h mentions SysV/i386 ABI +
> > historical
> > reasons, this didn't help.
>
On 2019-01-21 12:21:17 [+0100], Oleg Nesterov wrote:
> > This is part of our ABI for *sure*. Inspecting that state is how
> > userspace makes sense of MPX or protection keys faults. We even use
> > this in selftests/.
>
> Yes.
>
> And in any case I do not understand the idea to use the second
On 2019-01-14 17:24:00 [+0100], Borislav Petkov wrote:
> > @@ -315,40 +313,33 @@ static int __fpu__restore_sig(void __user *buf, void
> > __user *buf_fx, int size)
…
> > - sanitize_restored_xstate(tsk, , xfeatures, fx_only);
> > +
On 2019-01-30 12:56:14 [+0100], Borislav Petkov wrote:
> > diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
> > index bf4e6caad305e..a25be217f9a2c 100644
> > --- a/arch/x86/kernel/fpu/signal.c
> > +++ b/arch/x86/kernel/fpu/signal.c
> > @@ -156,7 +156,16 @@ int
On 2019-01-30 12:35:55 [+0100], Borislav Petkov wrote:
> On Wed, Jan 09, 2019 at 12:47:22PM +0100, Sebastian Andrzej Siewior wrote:
> > This is a refurbished series originally started by by Rik van Riel. The
> > goal is load the FPU registers on return to userland and not on ev
t matter at all, no need to save the
> dentry in struct backing_dev_info, so delete it.
>
> Cc: Andrew Morton
> Cc: Anders Roxell
> Cc: Arnd Bergmann
> Cc: Sebastian Andrzej Siewior
> Cc: Michal Hocko
> Cc: linux...@kvack.org
> Signed-off-by: Greg Kroah-Hartman
with
> of debugfs much simpler (they do not ever have to check the return
> value), and everyone can rest easy.
Thank you.
> Reported-by: Masami Hiramatsu
> Reported-by: Ulf Hansson
> Reported-by: Gary R Hook
> Reported-by: Heiko Carstens
> Cc: stable
> Signed-off-by: Gr
On 2019-01-22 19:33:48 [+0100], Greg Kroah-Hartman wrote:
> On Tue, Jan 22, 2019 at 06:19:08PM +0100, Sebastian Andrzej Siewior wrote:
> > but if you cat the stats file then it will dereference the bdi struct
> > which has been free(), right?
>
> Maybe, I don't know, y
On 2019-01-22 17:25:03 [+0100], Greg Kroah-Hartman wrote:
> > > }
> > >
> > > static void bdi_debug_unregister(struct backing_dev_info *bdi)
> > > {
> > > - debugfs_remove(bdi->debug_stats);
> > > - debugfs_remove(bdi->debug_dir);
> > > + debugfs_remove_recursive(bdi->debug_dir);
> >
> >
On 2019-01-22 16:21:07 [+0100], Greg Kroah-Hartman wrote:
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 8a8bb8796c6c..85ef344a9c67 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -102,39 +102,25 @@ static int bdi_debug_stats_show(struct seq_file *m,
> void *v)
> }
>
On 2019-01-18 13:17:28 [-0800], Dave Hansen wrote:
> On 1/18/19 1:14 PM, Sebastian Andrzej Siewior wrote:
> > The kernel saves task's FPU registers on user's signal stack before
> > entering the signal handler. Can we avoid that and have in-kernel memory
> > for that? Does so
tl;dr
The kernel saves task's FPU registers on user's signal stack before
entering the signal handler. Can we avoid that and have in-kernel memory
for that? Does someone rely on the FPU registers from the task in the
signal handler?
On 2019-01-17 13:22:53 [+0100], Borislav Petkov wrote:
> > The
ose workarounds for older binutils
> can be dropped.
indeed.
> Signed-off-by: Borislav Petkov
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Sebastian Andrzej Siewior
> Cc: Andy Lutomirski
Acked-by: Sebastian Andrzej Siewior
Sebastian
o.
>
> However, read it on function entry instead to make the code even simpler
> to follow.
makes sense.
Acked-by: Sebastian Andrzej Siewior
Sebastian
On 2019-01-16 20:36:03 [+0100], Borislav Petkov wrote:
> On Wed, Jan 09, 2019 at 12:47:27PM +0100, Sebastian Andrzej Siewior wrote:
> > Since ->initialized is always true for user tasks and kernel threads
> > don't get this far,
>
> Yeah, this is commit message is too lac
On 2019-01-15 12:39:10 [-0500], Steven Rostedt wrote:
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1742,6 +1742,13 @@ static int console_trylock_spinning(void)
> if (console_trylock())
> return 1;
>
> + /*
> + * The consoles are preemptable in
On 2019-01-15 12:44:53 [+], David Laight wrote:
> Once this is done it might be worth while adding a parameter to
> kernel_fpu_begin() to request the registers only when they don't
> need saving.
> This would benefit code paths where the gains are reasonable but not massive.
So if saving +