change its type to a spin_lock.
I *think* that we might even be able to remove the lock because all its
current users seem to have their own protection.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
di
The variable of type struct irq_remap_table is always named `table'
except in amd_iommu_update_ga() where it is called `irt'. Make it
consistent and name it `table' there as well.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 10 +-
1 file changed, 5 i
Hi,
this is the rebase on top of iommu/x86/amd of my last series. It takes
Scott's comments on my v2 into consideration.
It contains lock splitting and GFP_KERNEL allocation of remap-table.
Mostly cleanup.
The patches were boot tested on an AMD EPYC 7601.
Sebastian
e the lock is dropped since the same device can only be probed once.
However I check for both cases, just to be sure.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 65 +--
1 file changed, 46 insertions(+), 19 deletions(-)
irq_domain_mutex so the
initialization of iommu->irte_ops->set_allocated() should not race
against other users.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 34 --
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/d
: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 12 +---
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index d4c2b1a11924..fcfdce70707d 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -81,6
ith its own (iommu's) lock.
So move get_irq_table() out from under amd_iommu_devtable_lock.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/iommu/amd_iommu.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_
ope").
[0] https://lkml.kernel.org/r/ec8b4367-ef5b-5446-2cd0-9f8f7fd69...@monom.org
[1]
https://git.kernel.org/history/history/c/0e568881178ff0e0aceeafdb51f9fecab39e1923
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux/posix-timers.h | 1 +
kernel/time/posix-timers.c | 41 +
On 2018-03-22 06:37:45 [-0300], Arnaldo Carvalho de Melo wrote:
> Em Wed, Mar 21, 2018 at 11:43:58AM -0700, Linus Torvalds escreveu:
> > [ Adding PeterZ to participants due to query about lockdep_assert() ]
> >
> > On Wed, Mar 21, 2018 at 8:38 AM, Arnaldo Carvalho de Melo
> > wrote:
> > >
> > >
Linus Torvalds
Signed-off-by: Sebastian Andrzej Siewior
---
v1…2: Add credits.
drivers/target/target_core_tmr.c | 2 --
drivers/target/target_core_transport.c | 6 --
2 files changed, 8 deletions(-)
diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_t
On 2018-02-27 14:39:34 [-0300], Mauro Carvalho Chehab wrote:
> Hi Sebastian,
Hi Mauro,
> Sorry for taking some time to test it, has been busy those days...
:)
> Anyway, I tested it today. Didn't work. It keep losing data.
Okay, this was unexpected. What I learned from the thread is that you
use
On 2018-03-07 09:45:29 [-0600], Corey Minyard wrote:
> > I have no idea what is the wisest thing to do here. The obvious fix
> > would be to use the irqsafe() variant here and not drop the lock between
> > wake ups. That is essentially what swake_up_all_locked() does which I
> > need for the comple
rtcoming I propose to use WAKE_Q. swake_up_all()
will queue all to be woken up tasks on wake-queue with interrupts
disabled which should be "quick". After dropping the lock (and enabling
interrupts) it can wake the tasks one after the other.
Reported-by: Corey Minyard
Signed-off-by:
On 2018-03-09 07:29:31 [-0600], Corey Minyard wrote:
> From what I can tell, wake_up_q() is unbounded, and you have undone what
> the previous code had tried to accomplish. In the scenario I'm talking
> about,
> interrupts are still disabled here. That's why I was asking about where to
> put
> wa
This reverts commit "block: blk-mq: Use swait". The issue remains but
will be fixed differently.
Signed-off-by: Sebastian Andrzej Siewior
---
block/blk-core.c | 6 +++---
block/blk-mq.c | 8
include/linux/blkdev.h | 2 +-
3 files changed, 8 insertions(+), 8
based wake queue can't be used due to
wake_up_all() usage and disabled interrupts in !RT configs (as reported
by Corey Minyard).
Use a workqueue to invoke wake_up_all() in process context.
Signed-off-by: Sebastian Andrzej Siewior
---
block/blk-core.c | 13 -
include/lin
On 2018-03-13 21:10:39 [+0100], Peter Zijlstra wrote:
> On Tue, Mar 13, 2018 at 07:42:41PM +0100, Sebastian Andrzej Siewior wrote:
> > +static void blk_queue_usage_counter_release_swork(struct swork_event *sev)
> > +{
> > + struct request_queue *q =
> > +
On 2018-09-20 21:15:35 [-0700], Andy Lutomirski wrote:
> > I mean the fpu.initialized variable entirely. AFAIK, its only use is for
> > kernel threads — setting it to false lets us switch to a kernel thread and
> > back without saving and restoring. But TIF_LOAD_FPU should be able to
> > replace
On 2018-09-26 13:01:17 [+0200], Peter Zijlstra wrote:
> In particular this ordering ensures a concurrent unlock cannot trigger
> the uncontended handoff. Also it ensures that if the xchg() happens
> after a (successful) trylock, we must observe that LOCKED bit.
so I backported a bunch of atomic fi
On 2018-09-26 07:34:09 [-0700], Andy Lutomirski wrote:
> > So I *think* nobody relies on FPU-emulation anymore. I would suggest to
> > get this patch set into shape and then getting rid of
> > CONFIG_MATH_EMULATION?
Bryan, Denys, does anyone of you rely on CONFIG_MATH_EMULATION?
The manual for Qua
On 2018-09-26 17:08:42 [+0200], Thomas Gleixner wrote:
> Remember that 4.14 took almost 8 hours to fail, so I'd recommend not to
> cheer too much. Let's see what it tells us in 48 hours.
I don't cheer. I merely point out that shuffling the whole atomic code
around did not make a change while apply
On 2018-09-12 15:33:44 [+0200], To linux-kernel@vger.kernel.org wrote:
> There is no user of _TIF_ALLWORK_MASK since commit 21d375b6b34ff
> ("x86/entry/64: Remove the SYSCALL64 fast path").
> Remove unused define _TIF_ALLWORK_MASK.
>
> Signed-off-by: Sebastian Andrzej
On 2018-09-27 16:47:47 [+0200], Thomas Gleixner wrote:
> I wonder if it's just the store on the stack which makes it work. I've seen
> that when instrumenting x86. When the careful instrumentation just stayed
> in registers it failed. Once it was too much and stack got involved it
> vanished away.
On 2018-11-22 17:04:19 [+0800], zhe...@windriver.com wrote:
> From: He Zhe
>
> kmemleak_lock, as a rwlock on RT, can possibly be held in atomic context and
> causes the follow BUG.
>
> BUG: scheduling while atomic: migration/15/132/0x0002
…
> Preemption disabled at:
> [] cpu_stopper_thread+0
On 2018-11-23 12:02:55 [+0100], Andrea Parri wrote:
> > is this an RT-only problem? Because mainline should not allow read->read
> > locking or read->write locking for reader-writer locks. If this only
> > happens on v4.18 and not on v4.19 then something must have fixed it.
>
> Probably misunderst
Commit 75045f77f7a7 ("x86/extable: Introduce _ASM_EXTABLE_UA for uaccess
fixups") made copy_user_to_xregs() -> XSTATE_OP() use _ASM_EXTABLE_UA.
Commit 9da3f2b74054 ("x86/fault: BUG() when uaccess helpers fault on
kernel addresses") then decided that a #GP is not good and has to be
reported loudly.
ss points to userspace memory. Change it back.
The #GP is raised if the xstate content is invalid. But I guess the
details don't matter.
> Reported-by: Sebastian Andrzej Siewior
> Fixes: 75045f77f7a7 ("x86/extable: Introduce _ASM_EXTABLE_UA for uaccess
> fixups")
&
On 2018-11-19 09:02:45 [-0800], Dave Hansen wrote:
> On 11/19/18 8:04 AM, Sebastian Andrzej Siewior wrote:
> > v1…v2: A more verbose commit as message.
>
> I was really hoping for code comments. :)
I thought we agreed to make those in the larger series because those
comments in __f
On 2018-11-19 18:27:43 [+0100], Borislav Petkov wrote:
> On Mon, Nov 19, 2018 at 06:11:29PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2018-11-19 09:02:45 [-0800], Dave Hansen wrote:
> > > On 11/19/18 8:04 AM, Sebastian Andrzej Siewior wrote:
> > > > v1…v2: A
On 2018-11-08 12:12:52 [+0100], Paolo Bonzini wrote:
> On 07/11/2018 20:48, Sebastian Andrzej Siewior wrote:
> > index 375226055a413..5b33985d9f475 100644
> > --- a/arch/x86/kernel/fpu/xstate.c
> > +++ b/arch/x86/kernel/fpu/xstate.c
> > @@ -811,7 +811,7 @
lkml.kernel.org/r/20160226074940.ga28...@pd.tnic
Cc: sta...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
v2…v3: Rewording parts of the commit message as per Borislav Petkov.
v1…v2: A more verbose commit as message.
arch/x86/kernel/fpu/signal.c | 4 ++--
1 file changed, 2 insertions(+),
On 2018-11-22 17:04:19 [+0800], zhe...@windriver.com wrote:
> From: He Zhe
>
> kmemleak_lock, as a rwlock on RT, can possibly be held in atomic context and
> causes the follow BUG.
please use
[PATCH RT … ]
in the future when posting for RT. And this was (and is) on my TODO list.
Sebastian
On 2018-11-28 15:27:28 [+], David Laight wrote:
> Better still note it in the code.
I'm in favour of adding something to tools/testing/selftests/x86/.
> David
Sebastian
kernel_fpu_begin() block could set
fpu_fpregs_owner_ctx to NULL but a kernel thread does not use
user_fpu_begin().
This is a leftover from the lazy-FPU time.
Remove user_fpu_begin(), it does not change fpu_fpregs_owner_ctx's
content.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/inclu
6/fpu: Eager switch PKRU state
x86/fpu: Always store the registers in copy_fpstate_to_sigframe()
x86/fpu: Prepare copy_fpstate_to_sigframe() for TIF_NEED_FPU_LOAD
x86/fpu: Defer FPU state load until return to userspace
Sebastian Andrzej Siewior (24):
x86/fpu: Use ULL for shift in xfeature_uncompac
t does and
save a few cycles.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 42e0f98f34f54..3a59c578440e2 100644
--- a/arch/x86/kernel/fpu/signal.c
+++
Every user of user_insn() passes an user memory pointer to this macro.
Add might_fault() to user_insn() so we can spot users which are using
this macro in sections where page faulting is not allowed.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 1 +
1 file
he last user of __kernel_fpu_{begin|end}(), it can be
made static and not exported anymore.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/efi.h | 6 ++
arch/x86/include/asm/fpu/api.h | 16 ++--
arch/x86/kernel/fpu/core.c | 6 ++
3 files changed, 10 in
There are no users of fpu__restore() so it is time to remove it.
The comment regarding fpu__restore() and the TS bit has been stale since
commit b3b0870ef3ffe ("i387: do not preload FPU state at task switch time")
and has had no meaning since then.
Signed-off-by: Sebastian Andrzej Siewior
---
Doc
During the context switch the xstate is loaded which also includes the
PKRU value.
If xstate is restored on return to userland it is required that the
PKRU value in xstate is the same as the one in the CPU.
Save the PKRU in xstate during modification.
Signed-off-by: Sebastian Andrzej Siewior
Start refactoring __fpu__restore_sig() by inlining
copy_user_to_fpregs_zeroing().
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 42
1 file changed, 19 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b
Most users of __raw_xsave_addr() use a feature number, shift it to a
mask and then __raw_xsave_addr() shifts it back to the feature number.
Make __raw_xsave_addr() use the feature number as an argument.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/xstate.c | 22
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/thread_info.h
b/arch/x86/include/asm/thread_info.h
index cd6920674b905..1e64222030612 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/
From: Rik van Riel
The FPU registers need only to be saved if TIF_NEED_FPU_LOAD is not set.
Otherwise this has been already done and can be skipped.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 11 ++-
1 file changed, 10
From: Rik van Riel
Add a helper function that ensures the floating point registers for
the current task are active. Use it with preemption disabled.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/api.h | 11 +++
arch/x86/include/asm
Andrzej Siewior
---
arch/x86/mm/pkeys.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 05bb9a44eb1c3..50f65fc1b9a3f 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -142,13 +142,6 @@ u32 init_pkru_value = PKRU_AD_KEY( 1
. Before this commit the kernel thread would end up
with a random value which it inherited from the previous user task.
Signed-off-by: Rik van Riel
[bigeasy: save pkru to xstate, no cache, don't use __raw_xsave_addr()]
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/i
Dave Hansen says that the `wrpkru' is more expensive than `rdpkru'. It
has a higher cycle cost and it's also practically a (light) speculation
barrier.
As an optimisation read the current PKRU value and only write the new
one if it is different.
Signed-off-by: Sebastian
n Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/entry/common.c | 8 +++
arch/x86/include/asm/fpu/api.h | 22 +-
arch/x86/include/asm/fpu/internal.h | 27 +---
arch/x86/include/asm/trace/fpu.h| 5 +-
arch/x86/kernel/fpu/core.c
hs and keep the !ia32_fxstate version. Copy only
the user_i387_ia32_struct data structure in the ia32_fxstate.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 162 ++-
1 file changed, 65 insertions(+), 97 deletions(-)
diff --git a/arch
r value and the caller handles it.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 32 ++-
arch/x86/kernel/fpu/signal.c| 62 +++--
2 files changed, 71 insertions(+), 23 deletions(-)
diff --git a/arch/x86/include/asm/fp
ed from an earlier version of the patchset while
there still was lazy-FPU on x86.
Signed-off-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/internal.h | 45 -
arch/x86/kernel/fpu/signal.c| 29 +++
2 files c
ature_mask consistently.
This results in changes to the kvm code as:
feature -> xfeature_mask
index -> xfeature_nr
Suggested-by: Dave Hansen
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu/xstate.h | 4 ++--
arch/x86/kernel/fpu/xstate.c | 23 +++--
n opcode is emulated. It makes the removal of ->initialized easier if
the struct is also initialized in FPU-less case at the same time.
Move fpu__initialize() before the FPU check so it is also performed in
the FPU-less case.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/fpu
Since ->initialized is always true for user tasks and kernel threads
don't get this far, we always save the registers directly to userspace.
Remove check for ->initialized because it is always true and remove the
false condition.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86
p save/restore of the FPU
registers.
During the context switch into a kernel thread we don't do anything.
There is no reason to save the FPU state of a kernel thread.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/ia32/ia32_signal.c | 17 +++-
arch/x86/include/asm/fpu/inter
considered okay,
load it. Should something go wrong, return with an error and without
altering the original FPU registers.
The removal of "fpu__initialize()" is a nop because fpu->initialized is
already set for the user task.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/incl
for compacted buffers as long as the
CPU supports it, and this is what we care about.
Remove the "Note:" which is not accurate.
Suggested-by: Paolo Bonzini
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/xstate.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/x86/
fpu_begin() could also force saving the FPU's registers after
fpu__initialize() without changing the outcome here.
Remove the preempt_disable() section in fpu__clear(), preemption here
does not hurt.
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/core.c | 2 --
1 file changed,
There is no user of _TIF_ALLWORK_MASK since commit 21d375b6b34ff
("x86/entry/64: Remove the SYSCALL64 fast path").
Remove unused define _TIF_ALLWORK_MASK.
Reviewed-by: Borislav Petkov
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/include/asm/thread_info.h | 8
1 fi
The math_emu.h header file contains the definition of struct
math_emu_info. It is not used in this file.
Remove asm/math_emu.h include.
Reviewed-by: Andy Lutomirski
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/process_32.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a
: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/xstate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 87a57b7642d36..69d5740ed2546 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/f
The variable init_pkru_value isn't used outside of this file.
Make init_pkru_value static.
Acked-by: Dave Hansen
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/mm/pkeys.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 6e98e0a7
On 2018-11-28 23:20:35 [+0100], To linux-kernel@vger.kernel.org wrote:
> diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
> index fb16d0da71bca..f552b1d6c6958 100644
> --- a/arch/x86/kernel/fpu/signal.c
> +++ b/arch/x86/kernel/fpu/signal.c
> @@ -292,43 +295,51 @@ static int
he last user of __kernel_fpu_{begin|end}(), it can be
made static and not exported anymore.
Reviewed-by: Rik van Riel
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2: `carefull' -> `careful'
arch/x86/include/asm/efi.h | 6 ++
arch/x86/include/asm/fpu/api.h | 16 ++--
so I've been testing my FPU patches and noticed that the test
mpx-mini-test_32 [0] fails on 64bit host.
I went back to v4.4 and it didn't fully pass there either; it ended with:
| starting mpx bounds table test
| ERROR: siginfo bounds do not match shadow bounds for register 0
But it did way more than it
On 2018-11-28 23:20:06 [+0100], Sebastian Andrzej Siewior wrote:
> This is a refurbished series originally started by by Rik van Riel. The
Could someone please apply patch 1 - 7?
Sebastian
On 2018-11-12 09:48:08 [-0800], Dave Hansen wrote:
> On 11/12/18 7:56 AM, Sebastian Andrzej Siewior wrote:
> > Use local_bh_disable() around the restore sequence to avoid the race. BH
> > needs to be disabled because BH is allowed to run (even with preemption
> > disab
On 2018-11-19 07:04:35 [-0800], Dave Hansen wrote:
>
> Does the local_bh_disable() itself survive?
Not in __fpu__restore_sig(). I do have:
| static inline void __fpregs_changes_begin(void)
| {
|preempt_disable();
|local_bh_disable();
| }
and __fpregs_changes_begin() is introduced
On 2018-11-19 07:08:44 [-0800], Dave Hansen wrote:
> On 11/19/18 7:06 AM, Sebastian Andrzej Siewior wrote:
> > On 2018-11-19 07:04:35 [-0800], Dave Hansen wrote:
> >> Does the local_bh_disable() itself survive?
> > Not in __fpu__restore_sig(). I do have:
sta...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2: A more verbose commit as message.
arch/x86/kernel/fpu/signal.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 61a949d84dfa5..d
was lazy FPU support.
Link: https://lkml.kernel.org/r/20160226074940.ga28...@pd.tnic
Cc: sta...@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
arch/x86/kernel/fpu/signal.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x8
On 2018-11-29 09:13:04 [-0800], Dave Hansen wrote:
> On 11/29/18 6:17 AM, Sebastian Andrzej Siewior wrote:
> > This is broken since v4.12-rc1. This is known [1] since April this year.
> > Should I send a removal patch for MPX or is someone actually going to
> > fix this? Or do
On 2018-12-05 21:53:37 [+0800], He Zhe wrote:
> For call trace 1:
…
> Since kmemleak would most likely be used to debug in environments where
> we would not expect as great performance as without it, and kfree() has raw
> locks
> in its main path and other debug function paths, I suppose it wouldn
voked with state != TASK_RUNNING.
This isn't a problem since it would be reset to TASK_RUNNING later
anyway and we don't rely on the previous state.
Move the state update to TASK_RUNNING before hrtimer_cancel() so there
are no complaints from might_sleep() about a wrong state.
Signed-off-by:
On 2018-12-06 21:07:22 [+0100], Borislav Petkov wrote:
> > @@ -314,41 +312,34 @@ static int __fpu__restore_sig(void __user *buf, void
> > __user *buf_fx, int size)
> > * thread's fpu state, reconstruct fxstate from the fsave
> > * header. Validate and sanitize the copied
On 2018-11-24 22:26:46 [+0800], He Zhe wrote:
> On latest v4.19.1-rt3, both of the call traces can be reproduced with kmemleak
> enabied. And none can be reproduced with kmemleak disabled.
okay. So it needs attention.
> On latest mainline tree, none can be reproduced no matter kmemleak is enabled
On 2018-12-01 09:42:38 [+0100], Arnd Bergmann wrote:
> You are right that you can't take (or release) a mutex from interrupt
> context. However, I don't think converting a spinlock to a semaphore
> is going to help here either.
you can acquire a semaphore with a try_lock from interrupt context bu
Dear RT folks!
I'm pleased to announce the v4.19.1-rt3 patch set.
Changes since v4.19.1-rt2:
- A patch to the bcm2835 pinctrl driver to use raw_spinlock_t. Patch
by Lukas Wunner.
- The Atmel TCB timer patch set by Alexandre Belloni has been updated
to v7.
- The RCU Kconfig entry
On 2018-11-08 12:38:17 [+0100], Borislav Petkov wrote:
> Or simply BIT_ULL(xfeature_nr).
Yes, why not. Updated. Thanks.
Sebastian
On 2018-11-08 15:57:21 [+0100], Borislav Petkov wrote:
> On Wed, Nov 07, 2018 at 08:48:37PM +0100, Sebastian Andrzej Siewior wrote:
> > This is a preparation for the removal of the ->initialized member in the
> > fpu struct.
> > __fpu__restore_sig() is deactivating the FPU
On 2018-11-08 10:25:24 [-0800], Andy Lutomirski wrote:
> On Wed, Nov 7, 2018 at 11:49 AM Sebastian Andrzej Siewior
> wrote:
> >
> > __fpu__restore_sig() restores the CPU's FPU state directly from
> > userland. If we restore registers on return to userland then we can&
On 2018-11-09 19:52:02 [+0100], Borislav Petkov wrote:
> On Fri, Nov 09, 2018 at 06:35:21PM +0100, Sebastian Andrzej Siewior wrote:
> > fpu__drop() sets ->initialized to 0. As a result the context switch
>
> "... the context switch path landing in switch_fpu_prepare().
On 2018-07-02 17:34:34 [+0800], gengdongjiu wrote:
> The Linux kernel version is v4.1.46, and the preempt_rt patch is
> patch-4.1.46-rt52.patch.
the 4.1 series is no longer supported (neither RT wise nor non-RT,
https://www.kernel.org/category/releases.html). I suggest moving away.
If you notic
On 2018-07-02 19:19:07 [+0800], gengdongjiu wrote:
> Hi Sebastian ,
Hi gengdongjiu,
> > the 4.1 series is no longer supported (neither RT wise nor non-RT,
> > https://www.kernel.org/category/releases.html). I suggest to move away.
> > If you notice this problem now it is hardly a long running proj
Clark showed me this:
| BUG: sleeping function called from invalid context at
kernel/locking/rtmutex.c:974
| in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: systemd
| 5 locks held by systemd/1:
| #0: (sb_writers#7){.+.+}, at: [<(ptrval)>] mnt_want_write+0x1f/0x50
| #1: (&type->i_mut
On 2018-07-09 15:01:54 [-0400], Steven Rostedt wrote:
> > which is the trace_cgroup_rmdir() trace event in cgroup_rmdir(). The
> > trace event invokes cgroup_path() which acquires a spin_lock_t and this
> > is invoked within a preempt_disable()ed section.
>
> Correct. And I wish no trace event to
On 2018-07-09 17:48:54 [-0400], Steven Rostedt wrote:
> From: Steven Rostedt (VMware)
>
> Reported-by: Sebastian Andrzej Siewior
Reported-by: Clark Williams
I just forwarded the report.
> Signed-off-by: Steven Rostedt (VMware)
I am in favour of this change.
Sebastian
On 2018-07-10 20:56:24 [-0400], Steven Rostedt wrote:
> Hi Sebastian,
Hi Steven,
> I'm looking at backporting patches from 4.16-rt and noticed that you
> have:
>
> Revert "x86: Convert mce timer to hrtimer"
> Revert "x86/mce: use swait queue for mce wakeups"
>
> With no explanation to why they w
On 2018-07-03 23:35:39 [+0200], To Tejun Heo wrote:
> On 2018-07-03 13:24:24 [-0700], Tejun Heo wrote:
> > (cc'ing Peter and Ingo for lockdep)
> >
> > Hello, Sebastian.
> Hi Tejun,
>
> > On Tue, Jul 03, 2018 at 06:45:44PM +0200, Sebastian Andrz
On 2018-07-11 09:25:55 [-0400], Steven Rostedt wrote:
> Did you decide to create a local_lock_bh(lock) function? I don't see it.
>
> And should this be backported to 4.14-rt too? You state you saw this in
> 4.16-rt, but did you start doing something different then, or did the
> kernel change?
I w
On 2018-06-26 14:28:26 [-0700], Isaac J. Manjarres wrote:
> Remove CPU ID swapping in stop_two_cpus() so that the
> source CPU's stopper thread is added to the wake queue last,
> so that the source CPU's stopper thread is woken up last,
> ensuring that all other threads that it depends on are woken
On 2018-06-27 08:36:57 [-0400], Steven Rostedt wrote:
> On Wed, Apr 11, 2018 at 09:07:30PM +0200, Sebastian Andrzej Siewior wrote:
> >
> > This already happens:
> > - vmstat_shepherd() does get_online_cpus() and within this block it does
> > queue_delayed_work_on()
2220 prev_prio=120
Peter suggested to open code the new priority so people using tracehook
could get the deadline data out.
Reported-by: Mansky Christian
Fixes: b91473ff6e97 ("sched,tracing: Update trace_sched_pi_setprio()")
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2:
workaround we disable the might sleep warnings by setting
system_state to SYSTEM_SUSPEND before calling sysdev_suspend() and
restoring it to SYSTEM_RUNNING after sysdev_resume().
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux/kernel.h |1 +
kernel
From: Thomas Gleixner
Upstream commit '53da1d9456fe7f8 fix ptrace slowness' is nothing more
than a bandaid around the ptrace design trainwreck. It's not a
correctness issue, it's merely a cosmetic bandaid.
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej
On 2018-05-24 17:07:16 [+0200], Rafael J. Wysocki wrote:
> On Thu, May 24, 2018 at 4:24 PM, Sebastian Andrzej Siewior
> wrote:
> > From: Thomas Gleixner
> >
> > timekeeping suspend/resume calls read_persistent_clock() which takes
> > rtc_lock. That results in might
workaround we disable the might sleep warnings by setting
system_state to SYSTEM_SUSPEND before calling sysdev_suspend() and
restoring it to SYSTEM_RUNNING after sysdev_resume().
Signed-off-by: Thomas Gleixner
[bigeasy: cover s2idle]
Signed-off-by: Sebastian Andrzej Siewior
---
include/linux
The `s2idle_lock' is acquired during suspend while interrupts are
disabled even on RT. The lock is acquired for short sections only.
Make it a RAW lock which avoids "sleeping while atomic" warnings on RT.
Signed-off-by: Sebastian Andrzej Siewior
---
kernel/power/suspend.c | 14 ++
The `events_lock' is acquired during suspend while interrupts are
disabled even on RT. The lock is taken only for a very brief moment.
Make it a RAW lock which avoids "sleeping while atomic" warnings on RT.
Signed-off-by: Sebastian Andrzej Siewior
---
drivers/base/powe