atomic_check_{read,write} [Suggested by Mark Rutland].
> ---
> include/asm-generic/atomic-instrumented.h | 393 +++---
> scripts/atomic/gen-atomic-instrumented.sh | 17 +-
> 2 files changed, 218 insertions(+), 192 deletions(-)
The script changes and generated code look fin
On Mon, Oct 21, 2019 at 02:07:56PM -0400, Steven Rostedt wrote:
> On Mon, 21 Oct 2019 17:34:19 +0100
> Mark Rutland wrote:
>
> > Architectures may need to perform special initialization of ftrace
> > callsites, and today they do so by special-casing ftrace_make_nop() when
>
On Fri, Oct 18, 2019 at 09:10:28AM -0700, Sami Tolvanen wrote:
> Don't lose the current task's shadow stack when the CPU is suspended.
>
> Signed-off-by: Sami Tolvanen
> ---
> arch/arm64/mm/proc.S | 6 ++
> 1 file changed, 6 insertions(+)
>
> diff --git a/arch/arm64/mm/proc.S
On Fri, Oct 18, 2019 at 10:35:49AM -0700, Sami Tolvanen wrote:
> On Fri, Oct 18, 2019 at 10:23 AM Mark Rutland wrote:
> > I think scs_save() would better live in assembly in cpu_switch_to(),
> > where we switch the stack and current. It shouldn't matter whether
> >
ftrace_filter;
| echo function > current_tracer;
| modprobe
Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
selected, we wrap its use along with most of module_init_ftrace_plt()
with ifdeffery rather than using IS_ENABLED().
Signed-off-by: Mark Rutland
Cc: Ard Biesheuvel
Cc: Cat
https://lore.kernel.org/r/20190208150826.44ebc68...@newverein.lst.de
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git
arm64/ftrace-with-regs
Mark Rutland (7):
ftrace: add ftrace_init_nop()
module/ftrace: handle patchable-function-entry
arm64: module: rework special section ha
y use the definition in that case. When
DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
this removes some redundant work in that case.
I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
and verified that the section made it into the .ko files for modules.
Signed-
Now that we no longer refer to mod->arch.ftrace_trampolines in the body
of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery,
and make the code easier to follow. Likewise in ftrace_make_nop().
Let's do so.
Signed-off-by: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
the module. When
ARM64_MODULE_PLTS is selected, any correctly built module should have
one (and this is assumed by arm64's ftrace PLT code) and the absence of
such a section implies something has gone wrong at build time.
Subsequent patches will make use of the new helper.
Signed-off-by: Mark Rutland
So that assembly code can more easily manipulate the FP (x29) within a
pt_regs, add an S_FP asm-offsets definition.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
---
arch/arm64/kernel/asm-offsets.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/asm
to
initialize a callsite into a disabled state, and is not for disabling a
callsite that has been runtime enabled.
Signed-off-by: Mark Rutland
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: Torsten Duwe
---
kernel/trace/ftrace.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff
in-context.
Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
write callers in a more straightforward way.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
---
arch/arm64/include/asm/insn.h | 3 +++
arch/arm64/kernel/insn.c | 13 +
2 files
). A PLT is
allocated for each within modules.
Signed-off-by: Torsten Duwe
[Mark: rework asm, comments, PLTs, initialization, commit message]
Signed-off-by: Mark Rutland
Cc: AKASHI Takahiro
Cc: Amit Daniel Kachhap
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Josh Poimboeuf
Cc: Julien Thierry
Cc
On Mon, Oct 21, 2019 at 02:40:31PM +0100, Steven Price wrote:
> On 18/10/2019 18:10, Mark Rutland wrote:
> > On Tue, Oct 15, 2019 at 06:56:51PM +0100, Mark Rutland wrote:
> [...]
> >>> +PV_TIME_ST
> >>> += ==
> > >
On Sat, Oct 19, 2019 at 01:01:35PM +0200, Torsten Duwe wrote:
> Hi Mark!
Hi Torsten!
> On Fri, 18 Oct 2019 18:41:02 +0100 Mark Rutland
> wrote:
>
> > In the process of reworking this I spotted some issues that will get
> > in the way of livepatching. Notably:
>
On Wed, Oct 16, 2019 at 06:58:42PM +0100, Mark Rutland wrote:
> I've just done the core (non-arm64) bits today, and pushed that out:
>
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace-with-regs
>
> ... I'll fold the remainging bits
On Fri, Oct 18, 2019 at 07:12:52PM +0200, Jann Horn wrote:
> On Fri, Oct 18, 2019 at 6:16 PM Sami Tolvanen wrote:
> > This change implements shadow stack switching, initial SCS set-up,
> > and interrupt shadow stacks for arm64.
> [...]
> > +static inline void scs_save(struct task_struct *tsk)
> >
On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
> In a case like suspend-to-disk, a large number of CPU cores need to be
> shut down. At present, the CPU hotplug operation is serialised, and the
> CPU cores can only be shut down one by one. In this process, if PSCI
> affinity_info()
[adding mm folk]
On Fri, Oct 11, 2019 at 06:20:15PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 04:10:29PM +0100, Mark Rutland wrote:
> > On Thu, Oct 10, 2019 at 07:44:33PM +0100, Dave Martin wrote:
> > > +#define arch_validate_prot(prot, addr) arm64_valida
On Fri, Oct 11, 2019 at 06:20:15PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 04:10:29PM +0100, Mark Rutland wrote:
> > On Thu, Oct 10, 2019 at 07:44:33PM +0100, Dave Martin wrote:
> > > +#define arch_calc_vm_prot_bits(prot, pkey) arm64_calc_vm_prot_bits(prot)
>
On Fri, Oct 11, 2019 at 05:42:00PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 05:01:13PM +0100, Dave Martin wrote:
> > On Fri, Oct 11, 2019 at 04:44:45PM +0100, Dave Martin wrote:
> > > On Fri, Oct 11, 2019 at 04:40:43PM +0100, Mark Rutland wrote:
> > > >
On Fri, Oct 11, 2019 at 03:47:43PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 03:21:58PM +0100, Mark Rutland wrote:
> > On Thu, Oct 10, 2019 at 07:44:39PM +0100, Dave Martin wrote:
> > > Since normal execution of any non-branch instruction resets the
> > > PST
The following commit has been merged into the core/urgent branch of tip:
Commit-ID: b1fc5833357524d5d342737913dbe32ff3557bc5
Gitweb:
https://git.kernel.org/tip/b1fc5833357524d5d342737913dbe32ff3557bc5
Author: Mark Rutland
AuthorDate: Mon, 07 Oct 2019 11:45:36 +01:00
On Wed, Oct 16, 2019 at 01:42:59PM +0200, Jiri Kosina wrote:
> On Wed, 24 Jul 2019, Mark Rutland wrote:
>
> > > > > > So what's the status now? Besides debatable minor style
> > > > > > issues there were no more objections to v8. Would this
> > >
Hi Andrey,
On Wed, Oct 16, 2019 at 03:19:50PM +0300, Andrey Ryabinin wrote:
> On 10/14/19 4:57 PM, Daniel Axtens wrote:
> >>> + /*
> >>> + * Ensure poisoning is visible before the shadow is made visible
> >>> + * to other CPUs.
> >>> + */
> >>> + smp_wmb();
> >>
> >> I don't quite understand
On Tue, Oct 15, 2019 at 04:21:09PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 04:24:53PM +0100, Mark Rutland wrote:
> > On Thu, Oct 10, 2019 at 07:44:37PM +0100, Dave Martin wrote:
> > > Correct skipping of an instruction on AArch32 works a bit
> > > different
ful for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik
> Sign
could
> lead to undefined behavior in userspace.
>
> The kernel should not use the register when CONFIG_SVE is disabled.
> Therefore, we only need to hide them from userspace.
>
> Signed-off-by: Julien Grall
> Fixes: 06a916feca2b ('arm64: Expose SVE2 features for userspace')
On Tue, Oct 15, 2019 at 12:57:44AM +1100, Daniel Axtens wrote:
> Hi Andrey,
>
>
> >> + /*
> >> + * Ensure poisoning is visible before the shadow is made visible
> >> + * to other CPUs.
> >> + */
> >> + smp_wmb();
> >
> > I don't quite understand what this barrier does and why it is needed.
>
On Mon, Oct 14, 2019 at 11:09:40AM +0200, Marco Elver wrote:
> On Mon, 14 Oct 2019 at 10:40, Dmitry Vyukov wrote:
> >
> > On Mon, Oct 14, 2019 at 7:11 AM wrote:
> > >
> > > Hi Dmitry,
> > >
> > > I am from Qualcomm Linux Security Team, just going through KCSAN
> > > and found that there was a
On Tue, Oct 08, 2019 at 10:36:37AM +0200, Peter Zijlstra wrote:
> On Mon, Oct 07, 2019 at 11:45:36AM +0100, Mark Rutland wrote:
> > Both multi_cpu_stop() and set_state() access multi_stop_data::state
> > racily using plain accesses. These are subject to compiler
> > transf
On Fri, Oct 11, 2019 at 04:32:26PM +0100, Dave Martin wrote:
> On Fri, Oct 11, 2019 at 11:25:33AM -0400, Richard Henderson wrote:
> > On 10/11/19 11:10 AM, Mark Rutland wrote:
> > > On Thu, Oct 10, 2019 at 07:44:33PM +0100, Dave Martin wrote:
> > >> @@ -730,6 +730,
On Thu, Oct 10, 2019 at 07:44:37PM +0100, Dave Martin wrote:
> Correct skipping of an instruction on AArch32 works a bit
> differently from AArch64, mainly due to the different CPSR/PSTATE
> semantics.
>
> There have been various attempts to get this right. Currently
>
On Thu, Oct 10, 2019 at 07:44:33PM +0100, Dave Martin wrote:
> This patch adds the bare minimum required to expose the ARMv8.5
> Branch Target Identification feature to userspace.
>
> By itself, this does _not_ automatically enable BTI for any initial
> executable pages mapped by execve(). This
On Thu, Oct 10, 2019 at 07:44:40PM +0100, Dave Martin wrote:
> Since normal execution of any non-branch instruction resets the
> PSTATE BTYPE field to 0, so do the same thing when emulating a
> trapped instruction.
>
> Branches don't trap directly, so we should never need to assign a
> non-zero
On Thu, Oct 10, 2019 at 07:44:39PM +0100, Dave Martin wrote:
> Since normal execution of any non-branch instruction resets the
> PSTATE BTYPE field to 0, so do the same thing when emulating a
> trapped instruction.
>
> Branches don't trap directly, so we should never need to assign a
> non-zero
On Fri, Oct 11, 2019 at 02:33:43PM +0100, Marc Zyngier wrote:
> On Fri, 11 Oct 2019 11:50:11 +0100
> Mark Rutland wrote:
>
> > Hi,
> >
> > On Fri, Oct 11, 2019 at 11:19:00AM +0530, Sai Prakash Ranjan wrote:
> > > On latest QCOM SoCs like SM8150 an
Hi,
On Fri, Oct 11, 2019 at 11:19:00AM +0530, Sai Prakash Ranjan wrote:
> On latest QCOM SoCs like SM8150 and SC7180 with big.LITTLE arch, below
> warnings are observed during bootup of big cpu cores.
For reference, which CPUs are in those SoCs?
> SM8150:
>
> [0.271177] CPU features:
(i440FX + PIIX, 1996), BIOS 1.12.0-1
04/01/2014
Signed-off-by: Mark Rutland
Cc: Marco Elver
Cc: Thomas Gleixner
Cc: Peter Zijlstra
---
kernel/stop_machine.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index
On Fri, Sep 20, 2019 at 07:51:04PM +0200, Marco Elver wrote:
> On Fri, 20 Sep 2019 at 18:47, Dmitry Vyukov wrote:
> >
> > On Fri, Sep 20, 2019 at 6:31 PM Mark Rutland wrote:
> > >
> > > On Fri, Sep 20, 2019 at 04:18:57PM +0200, Marco Elver wrote:
> > >
d & SNR_IMC_MMIO_BASE_MASK)
| by 23 will throw away the upper 23 bits of the potentially 64-bit
| address. Fix this by casting pci_dword to a resource_size_t before
| masking and shifting it.
|
| Found by coverity ("Unintentional integer overflow").
Otherwise, the patch looks fine to me:
On Tue, Oct 01, 2019 at 03:39:41PM +0100, Julien Grall wrote:
> On 01/10/2019 15:33, Mark Rutland wrote:
> > On Sat, Sep 07, 2019 at 11:05:45AM +0100, Julien Grall wrote:
> > > On 9/6/19 6:20 PM, Andrew Cooper wrote:
> > > > On 06/09/2019 17:00, Arnd Bergmann wrote:
Hi Julien,
On Sat, Sep 07, 2019 at 11:05:45AM +0100, Julien Grall wrote:
> On 9/6/19 6:20 PM, Andrew Cooper wrote:
> > On 06/09/2019 17:00, Arnd Bergmann wrote:
> > > On Fri, Sep 6, 2019 at 5:55 PM Andrew Cooper
> > > wrote:
> > > > On 06/09/2019 16:39, Arnd Bergmann wrote:
> > > > >
On Fri, Sep 20, 2019 at 04:18:57PM +0200, Marco Elver wrote:
> Hi all,
Hi,
> We would like to share a new data-race detector for the Linux kernel:
> Kernel Concurrency Sanitizer (KCSAN) --
> https://github.com/google/ktsan/wiki/KCSAN (Details:
>
Hi Thomas,
As a heads-up, I'm going to be away next week, and I likely won't have
the chance to look at this in detail before October.
On Thu, Sep 19, 2019 at 05:03:14PM +0200, Thomas Gleixner wrote:
> When working on a way to move out the posix cpu timer expiry out of the
> timer interrupt
On Thu, Sep 12, 2019 at 02:11:44PM +0100, Will Deacon wrote:
> On Wed, Sep 11, 2019 at 04:15:46PM +0100, Mark Rutland wrote:
> > On Tue, Sep 10, 2019 at 03:40:44PM -0700, Sami Tolvanen wrote:
> > > Define a weak function in COND_SYSCALL instead of a weak alias to
> > >
: Sami Tolvanen
This looks correct to me, builds fine, and I assume it has been tested, so FWIW:
Acked-by: Mark Rutland
In looking at this, I came to the conclusion that we can drop the ifdeffery
around our SYSCALL_DEFINE0(), COND_SYSCALL(), and SYS_NI(), which I evidently
cargo-culted from x86 (wher
On Tue, Sep 03, 2019 at 12:32:49AM +1000, Daniel Axtens wrote:
> Hi Mark,
>
> >> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> >> + void *unused)
> >> +{
> >> + unsigned long page;
> >> +
> >> + page = (unsigned
On Fri, Aug 30, 2019 at 03:23:53PM -0400, Andrew F. Davis wrote:
> On 8/29/19 5:47 AM, Mark Rutland wrote:
> > On Wed, Aug 28, 2019 at 01:33:18PM -0400, Andrew F. Davis wrote:
> What we are seeing is a write-back from the L3 cache. Our bootloader writes the
> kernel image with caches
Hi Andrew,
On Wed, Aug 28, 2019 at 01:33:18PM -0400, Andrew F. Davis wrote:
> The exception level in which the kernel was entered needs to be saved for
> later. We do this by writing the exception level to memory. As this data
> is written with the MMU/cache off it will bypass any cache, after
Hi Andrew,
On Thu, Aug 22, 2019 at 04:48:57PM -0700, Andrew Morton wrote:
> On Mon, 19 Aug 2019 17:14:49 +0100 Mark Rutland wrote:
>
> > In several places we need to be able to operate on pointers which have
> > gone via a roundtrip:
> >
> >
/
Signed-off-by: Mark Rutland
Reviewed-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Acked-by: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Andrew Morton
Cc: Dmitry Vyukov
Cc: Will Deacon
---
lib/test_kasan.c | 41 +
1 file changed, 41 insertions(+)
this more robust by including
| explicitly.
... and with that, my Acked-by stands.
Thanks,
Mark.
> Acked-by: Mark Rutland
> Signed-off-by: Raphael Gault
> ---
> arch/arm64/kernel/perf_event.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kernel/perf
On Tue, Aug 20, 2019 at 04:55:24PM +0100, Raphael Gault wrote:
> Hi Mark,
>
> Thank you for your comments.
>
> On 8/20/19 4:49 PM, Mark Rutland wrote:
> > On Tue, Aug 20, 2019 at 04:23:17PM +0100, Mark Rutland wrote:
> > > Hi Raphael,
> > >
> >
On Tue, Aug 20, 2019 at 04:23:17PM +0100, Mark Rutland wrote:
> Hi Raphael,
>
> On Fri, Aug 16, 2019 at 01:59:31PM +0100, Raphael Gault wrote:
> > This feature is required in order to enable PMU counters direct
> > access from userspace only when the system is homogeneous.
>
On Fri, Aug 16, 2019 at 01:59:32PM +0100, Raphael Gault wrote:
> In order to be able to access the counter directly for userspace,
> we need to provide the index of the counter using the userpage.
> We thus need to override the event_idx function to retrieve and
> convert the perf_event index to
Hi Raphael,
On Fri, Aug 16, 2019 at 01:59:31PM +0100, Raphael Gault wrote:
> This feature is required in order to enable PMU counters direct
> access from userspace only when the system is homogeneous.
> This feature checks the model of each CPU brought online and compares it
> to the boot CPU.
Hi,
I found that when I enable both KCOV and UBSAN on arm64, clang fails to
emit any __sanitizer_cov_trace_*() calls in the resulting binary,
rendering KCOV useless.
For example, when building v5.3-rc3's arch/arm64/kernel/setup.o:
* With defconfig + CONFIG KCOV:
clang
/
Signed-off-by: Mark Rutland
Reviewed-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Cc: Alexander Potapenko
Cc: Andrew Morton
Cc: Andrey Ryabinin
Cc: Dmitry Vyukov
Cc: Will Deacon
---
lib/test_kasan.c | 40
1 file changed, 40 insertions(+)
On Mon, Aug 19, 2019 at 05:37:36PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 5:03 PM Mark Rutland wrote:
> >
> > On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> > > On Mon, Aug 19, 2019 at 3:34 PM Will Deacon wrote:
> > > >
>
On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 3:34 PM Will Deacon wrote:
> >
> > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > >
On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > __arm_v7s_unmap() calls iopte_deref() to translate a phys_to_virt address,
> > but it will modify pointer tag into 0xff, so there is a false positive.
> >
> > When enable
Hi,
On Mon, Aug 19, 2019 at 11:35:27AM +, Jisheng Zhang wrote:
> Implement KPROBES_ON_FTRACE for arm64.
It would be very helpful if the cover letter could explain what
KPROBES_ON_FTRACE is, and why it is wanted.
It's not clear to me whether this is enabling new functionality for
kprobes via
On Fri, Aug 16, 2019 at 10:41:00AM -0700, Andy Lutomirski wrote:
> On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland wrote:
> >
> > Hi Christophe,
> >
> > On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> > > Le 15/08/2019 à 02:16, Daniel Axtens
Hi Christophe,
On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> Le 15/08/2019 à 02:16, Daniel Axtens a écrit :
> > Hook into vmalloc and vmap, and dynamically allocate real shadow
> > memory to back the mappings.
> >
> > Most mappings in vmalloc space are small, requiring less
exhaustion after a week of Syzkaller
fuzzing with the last patchset, across 3 machines, so that sounds fine
to me.
Otherwise, this looks good to me now! For the x86 and fork patch, feel
free to add:
Acked-by: Mark Rutland
Mark.
>
> v1: https://lore.kernel.org/linux-mm/20190725055503.19507
On Wed, Aug 14, 2019 at 03:07:08PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-14 11:41:30 [+0100], Mark Rutland wrote:
> > Per commit:
> >
> > 0cecca9d03c964ab ("x86/fpu: Eager switch PKRU state")
> >
> > ... switch_fpu_state() is trying to
On Wed, Aug 14, 2019 at 02:39:19PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-14 11:41:24 [+0100], Mark Rutland wrote:
> …
> > Instances checking multiple PF_* flags at ocne are left as-is for now.
>
> s@ocne@once@
>
> Acked-by: Sebastian Andrzej Siewior
Wh
On Wed, Aug 14, 2019 at 01:26:43PM +0200, Geert Uytterhoeven wrote:
> Hi Mark,
>
> On Wed, Aug 14, 2019 at 12:43 PM Mark Rutland wrote:
> > Code checking whether a task is a kthread isn't very consistent. Some
> > code correctly tests task->flags & PF_THREAD, while ot
erable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: David S. Miller
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
arch/sparc/kernel/perf_event.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
In general, a non-NULL current->mm doesn't imply that current is not a
kthread, as kthreads can install an mm via use_mm(), and so it's
preferable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: Cata
mm doesn't imply that current is not a
kthread, as kthreads can install an mm via use_mm(), and so it's
preferable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Ingo Molnar
Cc: Pe
to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: Andi Kleen
Cc: Ingo Molnar
Cc: Kan Liang
Cc: Peter Zijlstra
---
arch/x86/events/intel/lbr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
ge as a result of this patch.
Signed-off-by: Mark Rutland
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
arch/alpha/kernel/process.c | 2 +-
arch/arc/kernel/process.c | 2 +-
arch/arm/kernel/process.c | 2 +-
arch/arm/mm/init.c | 2 +-
arch/arm64/kernel/process.c | 4
In general, a non-NULL current->mm doesn't imply that current is not a
kthread, as kthreads can install an mm via use_mm(), and so it's
preferable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: I
erable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: Will Deacon
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
arch/arm/kernel/perf_callchain.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
mm doesn't imply that current is not a
kthread, as kthreads can install an mm via use_mm(), and so it's
preferable to use is_kthread() to determine whether a thread is a
kthread.
For consistency, let's use is_kthread() here.
Signed-off-by: Mark Rutland
Cc: Borislav Petkov
Cc: Christoph Hellwig
Cc: Ingo M
ons return bool.
Signed-off-by: Mark Rutland
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
include/linux/sched.h | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9f51932bd543..b7e96409d75f 100644
--- a/include/linux/sch
1,2].
Thanks,
Mark.
[1] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git
sched/kthread-cleanup
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=sched/kthread-cleanup
Mark Rutland (9):
sched/core: add is_kthread() helper
sched: treewide: use is_kthr
Commit-ID: 41b57d1bb8a4084e651c1f9a754fca64952666a0
Gitweb: https://git.kernel.org/tip/41b57d1bb8a4084e651c1f9a754fca64952666a0
Author: Mark Rutland
AuthorDate: Tue, 6 Aug 2019 17:25:39 +0100
Committer: Borislav Petkov
CommitDate: Wed, 14 Aug 2019 09:48:58 +0200
lib: Remove redundant
On Tue, Aug 13, 2019 at 05:47:40PM +0200, Christoph Hellwig wrote:
> RISC-V has the concept of a cpu level interrupt controller. Part of it
> is exposed as bits in the status registers, and 2 new CSRs per privilege
> level in the instruction set, but the mechanisms to trigger IPIs and
> timer
ild failure [1].
>
> [1] https://lore.kernel.org/linux-mm/201908131204.b910fkl1%25...@intel.com/
This looks sane to me, so FWIW:
Acked-by: Mark Rutland
I guess Andrew will pick this up and fix up the conflict?
Thanks,
Mark.
>
> arch/microblaze/include/asm/pgalloc.h | 39
>
On Fri, Aug 09, 2019 at 01:53:55PM +0100, Joe Burmeister wrote:
> Many, though not all, AT25s have an instruction for chip erase.
> If there is one in the datasheet, it can be added to device tree.
> Erase can then be done in userspace via the sysfs API with a new
> "erase" device attribute. This
On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> From looking at this for a while, there are a few more things we should
> sort out:
> * We can use the split pmd locks (used by both x86 and arm64) to
> minimize contention on the init_mm ptl. As apply_to_page_range()
On Fri, Aug 09, 2019 at 03:16:33AM -0700, Matthew Wilcox wrote:
> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
> > Should alloc_gigantic_page() be made available as an interface for general
> > use in the kernel. The test module here uses very similar implementation
> > from
On Thu, Aug 08, 2019 at 06:43:25PM +0100, Mark Rutland wrote:
> On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> > Hi Daniel,
> >
> > This is looking really good!
> >
> > I spotted a few more things we need to deal with, so I've suggested some
>
ange in the future, and we will need to deal with that
> + * if it were to happen.
> + */
> +#define ARCH_RET_ADDR_AFTER_LOCAL_VARS 1
FWIW (with whatever this got renamed to):
Acked-by: Mark Rutland
Thanks,
Mark.
On Thu, Aug 08, 2019 at 06:18:39PM -0400, Qian Cai wrote:
> > On Aug 8, 2019, at 6:38 AM, Mark Rutland wrote:
> >
> > On Wed, Aug 07, 2019 at 11:29:16PM -0400, Qian Cai wrote:
> >> The commit 155433cb365e ("arm64: cache: Remove support for ASID-tagged
>
On Thu, Aug 08, 2019 at 10:09:16AM -0700, Nathan Chancellor wrote:
> On Thu, Aug 08, 2019 at 11:38:08AM +0100, Mark Rutland wrote:
> > On Wed, Aug 07, 2019 at 11:29:16PM -0400, Qian Cai wrote:
> > > The commit 155433cb365e ("arm64: cache: Remove support for ASID-tag
On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> Hi Daniel,
>
> This is looking really good!
>
> I spotted a few more things we need to deal with, so I've suggested some
> (not even compile-tested) code for that below. Mostly that's just error
> handli
Hi Daniel,
This is looking really good!
I spotted a few more things we need to deal with, so I've suggested some
(not even compile-tested) code for that below. Mostly that's just error
handling, and using helpers to avoid things getting too verbose.
On Wed, Jul 31, 2019 at 05:15:48PM +1000,
On Wed, Aug 07, 2019 at 11:29:16PM -0400, Qian Cai wrote:
> The commit 155433cb365e ("arm64: cache: Remove support for ASID-tagged
> VIVT I-caches") introduced some compilation warnings from GCC (and
> Clang) with -Winitializer-overrides,
>
> arch/arm64/kernel/cpuinfo.c:38:26: warning:
Hi Steve,
On Wed, Aug 07, 2019 at 12:34:01PM -0400, Steven Rostedt wrote:
> As arm64 saves the link register after a function's local variables are
> stored, it causes the max stack tracer to be off by one in its output
> of which function has the bloated stack frame.
For reference, it's a bit
On Tue, Aug 06, 2019 at 03:34:34PM -0400, Qian Cai wrote:
> The commit 155433cb365e ("arm64: cache: Remove support for ASID-tagged
> VIVT I-caches") introduced some compilation warnings from GCC (and
> Clang),
>
> arch/arm64/kernel/cpuinfo.c:38:26: warning: initialized field
> overwritten
Signed-off-by: Mark Rutland
Cc: Andrew Morton
Cc: Borislav Petkov
Cc: Gary R Hook
Cc: Ingo Molnar
---
lib/Makefile | 4
1 file changed, 4 deletions(-)
I've verified this atop of v5.3-rc3, where the Makefile removes all of
CC_FLAGS_FTRACE (containing "-pg -mrecord-mcount -mfentry"
> Data abort info:
> ISV = 0, ISS = 0x0046
> CM = 0, WnR = 1
>
> After:
> Unable to handle kernel paging request at virtual address c000
> Mem abort info:
> ESR = 0x9646
> EC = 0x25, Exception class = DABT (current EL), IL = 32 bits
> SET =
On Mon, Aug 05, 2019 at 11:50:03PM -0400, Qian Cai wrote:
>
>
> > On Aug 5, 2019, at 10:01 AM, Will Deacon wrote:
> >
> > On Mon, Aug 05, 2019 at 07:47:37AM -0400, Qian Cai wrote:
> >>
> >>
> >>> On Aug 5, 2019, at 5:52 AM, Will Deacon wrote:
> >>>
> >>> On Fri, Aug 02, 2019 at 11:32:24AM
12 /* syscall emulation active */
> #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
> #define TIF_FREEZE 19
> #define TIF_RESTORE_SIGMASK 20
FWIW this looks sane to me, so:
Acked-by: Mark Rutland
Mark.
> --
> 2.17.1
>
Hi,
If you have a patch affecting arm64, please Cc LAKML and the arm64
maintainers. I've added them to this sub-thread.
On Wed, Jul 31, 2019 at 05:04:37PM +0800, Jiping Ma wrote:
> The PC of one frame is matched to the next frame's function, rather
> than the function of its own frame.
As Steve