Re: [PATCH 2/2] powerpc: support KASAN instrumentation of bitops

2019-08-19 Thread Daniel Axtens
Christophe Leroy writes: > Le 19/08/2019 à 08:28, Daniel Axtens a écrit : >> In KASAN development I noticed that the powerpc-specific bitops >> were not being picked up by the KASAN test suite. > > I'm not sure anybody cares about who noticed the problem. This sentenc

Re: powerpc asm-prototypes.h seems odd

2019-08-27 Thread Daniel Axtens
exists to support sparse, squashing a bunch of sparse warnings. From the commit where I introduced it: commit 42f5b4cacd783faf05e3ff8bf85e8be31f3dfa9d Author: Daniel Axtens Date: Wed May 18 11:16:50 2016 +1000 powerpc: Introduce asm-prototypes.h Sparse picked up a number of function

Re: [PATCH v5 1/2] powerpc/xmon: Allow listing and clearing breakpoints in read-only mode

2019-08-28 Thread Daniel Axtens
Hi Chris, > Read-only mode should not prevent listing and clearing any active > breakpoints. I tested this and it works for me: Tested-by: Daniel Axtens > + if (xmon_is_ro || !scanhex(&a)) { It took me a while to figure out what this line does: as I understand it, th

Re: [PATCH v5 2/2] powerpc/xmon: Restrict when kernel is locked down

2019-08-29 Thread Daniel Axtens
Hi, > Xmon should be either fully or partially disabled depending on the > kernel lockdown state. I've been kicking the tyres of this, and it seems to work well: Tested-by: Daniel Axtens > > Put xmon into read-only mode for lockdown=integrity and prevent user > entry int

[PATCH v5 0/5] kasan: support backing vmalloc space with real shadow memory

2019-08-29 Thread Daniel Axtens
he vmap book-keeping - Split out the test into a separate patch - Optional patch to track the number of pages allocated - minor checkpatch cleanups Daniel Axtens (5): kasan: support backing vmalloc space with real shadow memory kasan: add test for vmalloc fork: support VMAP_STACK with

[PATCH v5 1/5] kasan: support backing vmalloc space with real shadow memory

2019-08-29 Thread Daniel Axtens
. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik Signed-off-by: Daniel Axtens [Mark: rework shadow allocation] Signed-off-by: Mark Rutland -- v2: let kasan_unpoison_shadow deal with ranges that do not use a full shadow byte. v3: relax module alignment

[PATCH v5 2/5] kasan: add test for vmalloc

2019-08-29 Thread Daniel Axtens
Test kasan vmalloc support by adding a new test to the module. Signed-off-by: Daniel Axtens -- v5: split out per Christophe Leroy --- lib/test_kasan.c | 26 ++ 1 file changed, 26 insertions(+) diff --git a/lib/test_kasan.c b/lib/test_kasan.c index 49cc4d570a40
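
The diff itself is truncated by the archive, but the shape of such a test is easy to sketch. The following is a hedged approximation only, not the exact hunk from the patch (the function name and sizes are illustrative, and it assumes lib/test_kasan.c's existing includes): allocate with vmalloc() and read one byte past the requested size, which KASAN_VMALLOC should now report.

static noinline void __init vmalloc_oob(void)
{
	void *area;

	pr_info("vmalloc out-of-bounds\n");

	/*
	 * Request less than a page so the byte just past the requested
	 * size is still inside the mapped page; touching the guard page
	 * would fault in the MMU instead of being caught by KASAN.
	 */
	area = vmalloc(3000);
	if (!area) {
		pr_err("Allocation failed\n");
		return;
	}

	/* Out-of-bounds read that the vmalloc shadow should catch. */
	((volatile char *)area)[3100];
	vfree(area);
}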

[PATCH v5 3/5] fork: support VMAP_STACK with KASAN_VMALLOC

2019-08-29 Thread Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward: - clear the shadow region of vmapped stacks when swapping them in - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN Reviewed-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- arch/Kconfig | 9 + kernel
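
A minimal sketch of the first bullet, assuming the cached-stack reuse path in kernel/fork.c and the kasan_unpoison_shadow() interface as it was named at the time; the wrapper function here is illustrative, not the actual hunk.

#include <linux/kasan.h>
#include <linux/thread_info.h>
#include <linux/vmalloc.h>

/*
 * A vmap'ed stack taken from the per-cpu cache may still have poisoned
 * shadow left over from the previous task, so clear it before reuse.
 */
static void reuse_cached_vmap_stack(struct vm_struct *s)
{
	kasan_unpoison_shadow(s->addr, THREAD_SIZE);
}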

[PATCH v5 4/5] x86/kasan: support KASAN_VMALLOC

2019-08-29 Thread Daniel Axtens
illed dynamically. Acked-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- v5: fix some checkpatch CHECK warnings. There are some that remain around lines ending with '(': I have not changed these because it's consistent with the rest of the file and it's not easy

[PATCH v5 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-08-29 Thread Daniel Axtens
Provide the current number of vmalloc shadow pages in /sys/kernel/debug/kasan_vmalloc/shadow_pages. Signed-off-by: Daniel Axtens --- Merging this is probably overkill, but I leave it to the discretion of the broader community. On v4 (no dynamic freeing), I saw the following approximate
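
One way such a counter can be wired up, as a hedged sketch only: the variable and init-function names below are illustrative and the real patch may differ.

#include <linux/atomic.h>
#include <linux/debugfs.h>
#include <linux/init.h>

/* Bumped when a shadow page is allocated, decremented when one is freed. */
static atomic_t vmalloc_shadow_pages = ATOMIC_INIT(0);

static int __init kasan_vmalloc_debugfs_init(void)
{
	struct dentry *dir = debugfs_create_dir("kasan_vmalloc", NULL);

	debugfs_create_atomic_t("shadow_pages", 0444, dir,
				&vmalloc_shadow_pages);
	return 0;
}
late_initcall(kasan_vmalloc_debugfs_init);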

Re: [PATCH v2 1/2] kasan: support instrumented bitops combined with generic bitops

2019-08-29 Thread Daniel Axtens
Daniel Axtens writes: > Currently bitops-instrumented.h assumes that the architecture provides > atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit). > This is true on x86 and s390, but is not always true: there is a > generic bitops/non-atomic.h header that prov

Re: [PATCH v5 1/5] kasan: support backing vmalloc space with real shadow memory

2019-08-30 Thread Daniel Axtens
Hi all, > +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, > + void *unused) > +{ > + unsigned long page; > + > + page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT); > + > + spin_lock(&init_mm.page_table_lock); > + > +
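
The quoted helper is cut short by the archive; for context, a sketch of roughly the shape it takes (not the exact v5 hunk): clear the shadow PTE and free the page backing it, both under init_mm.page_table_lock so it cannot race with the populate path.

static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
					void *unused)
{
	unsigned long page;

	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);

	spin_lock(&init_mm.page_table_lock);
	if (likely(!pte_none(*ptep))) {
		/* Unmap the shadow PTE and release the backing page. */
		pte_clear(&init_mm, addr, ptep);
		free_page(page);
	}
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}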

[PATCH v6 0/5] kasan: support backing vmalloc space with real shadow memory

2019-09-02 Thread Daniel Axtens
it messages and docs - Dynamically free unused shadow pages by hooking into the vmap book-keeping - Split out the test into a separate patch - Optional patch to track the number of pages allocated - minor checkpatch cleanups v6: Properly guard freeing pages in patch 1, drop debugging code. Daniel

[PATCH v6 1/5] kasan: support backing vmalloc space with real shadow memory

2019-09-02 Thread Daniel Axtens
. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik Signed-off-by: Daniel Axtens [Mark: rework shadow allocation] Signed-off-by: Mark Rutland -- v2: let kasan_unpoison_shadow deal with ranges that do not use a full shadow byte. v3: relax module alignment

[PATCH v6 2/5] kasan: add test for vmalloc

2019-09-02 Thread Daniel Axtens
Test kasan vmalloc support by adding a new test to the module. Signed-off-by: Daniel Axtens -- v5: split out per Christophe Leroy --- lib/test_kasan.c | 26 ++ 1 file changed, 26 insertions(+) diff --git a/lib/test_kasan.c b/lib/test_kasan.c index 49cc4d570a40

[PATCH v6 3/5] fork: support VMAP_STACK with KASAN_VMALLOC

2019-09-02 Thread Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward: - clear the shadow region of vmapped stacks when swapping them in - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN Reviewed-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- arch/Kconfig | 9 + kernel

[PATCH v6 4/5] x86/kasan: support KASAN_VMALLOC

2019-09-02 Thread Daniel Axtens
illed dynamically. Acked-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- v5: fix some checkpatch CHECK warnings. There are some that remain around lines ending with '(': I have not changed these because it's consistent with the rest of the file and it's not easy

[PATCH v6 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-09-02 Thread Daniel Axtens
Provide the current number of vmalloc shadow pages in /sys/kernel/debug/kasan_vmalloc/shadow_pages. Signed-off-by: Daniel Axtens --- Merging this is probably overkill, but I leave it to the discretion of the broader community. On v4 (no dynamic freeing), I saw the following approximate

Re: [PATCH v6 1/5] kasan: support backing vmalloc space with real shadow memory

2019-09-02 Thread Daniel Axtens
Hi Mark, >> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr, >> +void *unused) >> +{ >> +unsigned long page; >> + >> +page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT); >> + >> +spin_lock(&init_mm.page_table_lock); >>

[PATCH v7 0/5] kasan: support backing vmalloc space with real shadow memory

2019-09-03 Thread Daniel Axtens
net/ Properly guard freeing pages in patch 1, drop debugging code. v7: Add a TLB flush on freeing, thanks Mark Rutland. Explain more clearly how I think freeing is concurrency-safe. Daniel Axtens (5): kasan: support backing vmalloc space with real shadow memory kasan: add test for vma

[PATCH v7 1/5] kasan: support backing vmalloc space with real shadow memory

2019-09-03 Thread Daniel Axtens
. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik Signed-off-by: Daniel Axtens [Mark: rework shadow allocation] Signed-off-by: Mark Rutland -- v2: let kasan_unpoison_shadow deal with ranges that do not use a full shadow byte. v3: relax module alignment

[PATCH v7 2/5] kasan: add test for vmalloc

2019-09-03 Thread Daniel Axtens
Test kasan vmalloc support by adding a new test to the module. Signed-off-by: Daniel Axtens -- v5: split out per Christophe Leroy --- lib/test_kasan.c | 26 ++ 1 file changed, 26 insertions(+) diff --git a/lib/test_kasan.c b/lib/test_kasan.c index 49cc4d570a40

[PATCH v7 3/5] fork: support VMAP_STACK with KASAN_VMALLOC

2019-09-03 Thread Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward: - clear the shadow region of vmapped stacks when swapping them in - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN Reviewed-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- arch/Kconfig | 9 + kernel

[PATCH v7 4/5] x86/kasan: support KASAN_VMALLOC

2019-09-03 Thread Daniel Axtens
illed dynamically. Acked-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- v5: fix some checkpatch CHECK warnings. There are some that remain around lines ending with '(': I have not changed these because it's consistent with the rest of the file and it's not easy

[PATCH v7 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-09-03 Thread Daniel Axtens
Provide the current number of vmalloc shadow pages in /sys/kernel/debug/kasan_vmalloc/shadow_pages. Signed-off-by: Daniel Axtens --- Merging this is probably overkill, but I leave it to the discretion of the broader community. On v4 (no dynamic freeing), I saw the following approximate

Re: [PATCH v7 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-09-03 Thread Daniel Axtens
Andrey Konovalov writes: > On Tue, Sep 3, 2019 at 4:56 PM Daniel Axtens wrote: >> >> Provide the current number of vmalloc shadow pages in >> /sys/kernel/debug/kasan_vmalloc/shadow_pages. > > Maybe it makes sense to put this into /sys/kernel/debug/kasan/ > (

Re: [PATCH v7 0/5] kasan: support backing vmalloc space with real shadow memory

2019-09-11 Thread Daniel Axtens
Hi Christophe, > Are any other patches required prior to this series ? I have tried to > apply it on later powerpc/merge branch without success: It applies on the latest linux-next. I didn't base it on powerpc/* because it's generic. Regards, Daniel

Re: [PATCH v7 0/2] Restrict xmon when kernel is locked down

2019-09-17 Thread Daniel Axtens
Hi, So Matthew Garrett and I talked about this at Linux Plumbers. Matthew, if I understood correctly, your concern was that this doesn't sit well with the existing threat model for lockdown. As I understand it, the idea is that if you're able to get access to the physical console, you're already a

[PATCH v8 0/5] kasan: support backing vmalloc space with real shadow memory

2019-09-30 Thread Daniel Axtens
hadow_pages to kasan/vmalloc_shadow_pages Daniel Axtens (5): kasan: support backing vmalloc space with real shadow memory kasan: add test for vmalloc fork: support VMAP_STACK with KASAN_VMALLOC x86/kasan: support KASAN_VMALLOC kasan debug: track pages allocated for vmalloc shadow Documentation/dev-t

[PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-01 Thread Daniel Axtens
. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik Signed-off-by: Daniel Axtens [Mark: rework shadow allocation] Signed-off-by: Mark Rutland -- v2: let kasan_unpoison_shadow deal with ranges that do not use a full shadow byte. v3: relax module alignment

[PATCH v8 2/5] kasan: add test for vmalloc

2019-10-01 Thread Daniel Axtens
Test kasan vmalloc support by adding a new test to the module. Signed-off-by: Daniel Axtens -- v5: split out per Christophe Leroy --- lib/test_kasan.c | 26 ++ 1 file changed, 26 insertions(+) diff --git a/lib/test_kasan.c b/lib/test_kasan.c index 49cc4d570a40

[PATCH v8 3/5] fork: support VMAP_STACK with KASAN_VMALLOC

2019-10-01 Thread Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward: - clear the shadow region of vmapped stacks when swapping them in - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN Reviewed-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- arch/Kconfig | 9 + kernel

[PATCH v8 4/5] x86/kasan: support KASAN_VMALLOC

2019-10-01 Thread Daniel Axtens
illed dynamically. Acked-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- v5: fix some checkpatch CHECK warnings. There are some that remain around lines ending with '(': I have not changed these because it's consistent with the rest of the file and it's not easy

[PATCH v8 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-10-01 Thread Daniel Axtens
Provide the current number of vmalloc shadow pages in /sys/kernel/debug/kasan/vmalloc_shadow_pages. Signed-off-by: Daniel Axtens --- v8: rename kasan_vmalloc/shadow_pages -> kasan/vmalloc_shadow_pages On v4 (no dynamic freeing), I saw the following approximate figures on my test VM: - fr

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-01 Thread Daniel Axtens
Hi, >> /* >> * Find a place in the tree where VA potentially will be >> * inserted, unless it is merged with its sibling/siblings. >> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va, >> if (sibling->va_end == va->va_start) { >> si

Re: [PATCH v3 3/4] powerpc/mm/ptdump: debugfs handler for W+X checks at runtime

2019-10-07 Thread Daniel Axtens
Russell Currey writes: > Very rudimentary, just > > echo 1 > [debugfs]/check_wx_pages > > and check the kernel log. Useful for testing strict module RWX. I was very confused that this requires the boot-time testing to be enabled to appear in debugfs. Could you change the kconfig snippet f

Re: [PATCH 0/2] ocxl: Move SPA and TL definitions

2019-10-09 Thread Daniel Axtens
Hi Christophe, As well as the checkpatch warnings noted on Patchwork (https://patchwork.ozlabs.org/patch/1173804/ and https://patchwork.ozlabs.org/patch/1173805/), I noticed: Applying: powerpc/powernv: ocxl move SPA definition .git/rebase-apply/patch:405: new blank line at EOF.

Re: [PATCH v3 4/4] powerpc: Enable STRICT_MODULE_RWX

2019-10-10 Thread Daniel Axtens
Hi Russell, Tested-by: Daniel Axtens # e6500 Because ptdump isn't quite working on book3e 64bit atm, I hacked it up to print the raw PTE and the extracted flags. After loading a module, I see the supervisor write bit set without module RWX, and it cleared with module RWX. Modules still se

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-10 Thread Daniel Axtens
Hi Uladzislau, > Looking at it one more, i think above part of code is a bit wrong > and should be separated from merge_or_add_vmap_area() logic. The > reason is to keep it simple and do only what it is supposed to do: > merging or adding. > > Also the kasan_release_vmalloc() gets called twice th

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-14 Thread Daniel Axtens
Hi Andrey, >> +/* >> + * Ensure poisoning is visible before the shadow is made visible >> + * to other CPUs. >> + */ >> +smp_wmb(); > > I'm not quite understand what this barrier do and why it needed. > And if it's really needed there should be a pairing barrier > on the other
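
The question is the classic publish/consume pairing rule. A generic illustration (not the vmalloc-shadow code itself) of why an smp_wmb() on the writer normally wants a pairing smp_rmb() on the reader:

static int payload;
static int published;

static void writer(void)
{
	payload = 42;			/* (1) fill in the data      */
	smp_wmb();			/* order (1) before (2)      */
	WRITE_ONCE(published, 1);	/* (2) make it visible       */
}

static void reader(void)
{
	if (READ_ONCE(published)) {	/* observed (2) ...          */
		smp_rmb();		/* ... pairs with smp_wmb()  */
		BUG_ON(payload != 42);	/* ... so (1) is visible too */
	}
}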

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-14 Thread Daniel Axtens
Mark Rutland writes: > On Tue, Oct 01, 2019 at 04:58:30PM +1000, Daniel Axtens wrote: >> Hook into vmalloc and vmap, and dynamically allocate real shadow >> memory to back the mappings. >> >> Most mappings in vmalloc space are small, requiring less than a f
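
The mechanism under review — lazily backing vmalloc shadow with real pages — boils down to an apply_to_page_range() callback along these lines. This is a hedged sketch based on the description quoted above, not the exact hunk; KASAN_VMALLOC_INVALID is a poison value the series introduces. apply_to_page_range() allocates any missing intermediate page tables and invokes the callback once per shadow PTE.

static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
				      void *unused)
{
	unsigned long page;
	pte_t pte;

	if (likely(!pte_none(*ptep)))
		return 0;	/* this shadow page is already real */

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Start fully poisoned; vmalloc unpoisons the in-use range. */
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);

	if (page)
		free_page(page);	/* another CPU beat us to it */

	return 0;
}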

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-14 Thread Daniel Axtens
>>> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size, >>> unsigned long align, >>> if (!addr) >>> return NULL; >>> >>> + if (kasan_populate_vmalloc(real_size, area)) >>> + return NULL; >>> + >> >> KASAN itself uses __vmalloc_node_range() to allocate

Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-14 Thread Daniel Axtens
> There is a potential problem here, as Will Deacon wrote up at: > > > https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-w...@kernel.org/ > > ... in the section starting: > > | *** Other architecture maintainers -- start here! *** > > ... whereby the CPU can spuriously fault on a

[PATCH v9 0/5] kasan: support backing vmalloc space with real shadow memory

2019-10-16 Thread Daniel Axtens
0191001065834.8880-1-...@axtens.net/ rename kasan_vmalloc/shadow_pages to kasan/vmalloc_shadow_pages v9: address a number of review comments for patch 1. Daniel Axtens (5): kasan: support backing vmalloc space with real shadow memory kasan: add test for vmalloc fork: support VMAP_STACK with

[PATCH v9 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-16 Thread Daniel Axtens
. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik Co-developed-by: Mark Rutland Signed-off-by: Mark Rutland [shadow rework] Signed-off-by: Daniel Axtens -- [I haven't tried to resolve the question of spurious faults. My understanding is that in order to se

[PATCH v9 2/5] kasan: add test for vmalloc

2019-10-16 Thread Daniel Axtens
Test kasan vmalloc support by adding a new test to the module. Signed-off-by: Daniel Axtens -- v5: split out per Christophe Leroy --- lib/test_kasan.c | 26 ++ 1 file changed, 26 insertions(+) diff --git lib/test_kasan.c lib/test_kasan.c index 49cc4d570a40

[PATCH v9 3/5] fork: support VMAP_STACK with KASAN_VMALLOC

2019-10-16 Thread Daniel Axtens
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward: - clear the shadow region of vmapped stacks when swapping them in - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN Reviewed-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- arch/Kconfig | 9 + kernel

[PATCH v9 4/5] x86/kasan: support KASAN_VMALLOC

2019-10-16 Thread Daniel Axtens
illed dynamically. Acked-by: Dmitry Vyukov Signed-off-by: Daniel Axtens --- v5: fix some checkpatch CHECK warnings. There are some that remain around lines ending with '(': I have not changed these because it's consistent with the rest of the file and it's not easy

[PATCH v9 5/5] kasan debug: track pages allocated for vmalloc shadow

2019-10-16 Thread Daniel Axtens
Provide the current number of vmalloc shadow pages in /sys/kernel/debug/kasan/vmalloc_shadow_pages. Signed-off-by: Daniel Axtens --- v8: rename kasan_vmalloc/shadow_pages -> kasan/vmalloc_shadow_pages On v4 (no dynamic freeing), I saw the following approximate figures on my test VM: - fr

[PATCH] powerpc: PPC_SECURE_BOOT should not require PowerNV

2020-09-23 Thread Daniel Axtens
s worth adding a new condition - at some stage we'll want to add a backend for pSeries anyway. Fixes: 61f879d97ce4 ("powerpc/pseries: Detect secure and trusted boot state of the system.") Cc: Nayna Jain Signed-off-by: Daniel Axtens --- arch/powerpc/Kconfig | 2 +- 1 file changed

Re: [PATCH] cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier

2020-09-23 Thread Daniel Axtens
patch locally: $ scripts/checkpatch.pl -g HEAD -strict WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line) #15: make[3]: *** [./scripts/Makefile.build:316: drivers/cpufreq/powernv-cpufreq.o] Error 1 This is benign and you shouldn't wrap that line anyway.

Re: [PATCH v5 1/5] powerpc/sstep: Emulate prefixed instructions only when CPU_FTR_ARCH_31 is set

2020-10-11 Thread Daniel Axtens
Hi, Apologies if this has come up in a previous revision. > case 1: > + if (!cpu_has_feature(CPU_FTR_ARCH_31)) > + return -1; > + > prefix_r = GET_PREFIX_R(word); > ra = GET_PREFIX_RA(suffix); The comment above analyse_instr read

Re: [PATCH v5 1/5] powerpc/sstep: Emulate prefixed instructions only when CPU_FTR_ARCH_31 is set

2020-10-12 Thread Daniel Axtens
Ravi Bangoria writes: > Hi Daniel, > > On 10/12/20 7:21 AM, Daniel Axtens wrote: >> Hi, >> >> Apologies if this has come up in a previous revision. >> >> >>> case 1: >>> + if (!cpu_has_feature(CPU_FTR_ARCH_31)) >>

Re: [PATCH v5 1/5] powerpc/sstep: Emulate prefixed instructions only when CPU_FTR_ARCH_31 is set

2020-10-12 Thread Daniel Axtens
does, and looks good to me: Reviewed-by: Daniel Axtens Kind regards, Daniel > From: Balamuruhan S > > Unconditional emulation of prefixed instructions will allow > emulation of them on Power10 predecessors which might cause > issues. Restrict that. > > Fixes: 3920742b92f5 ("

Re: [PATCH v5 01/10] powerpc/uaccess: Add unsafe_copy_from_user

2021-02-04 Thread Daniel Axtens
Hi Chris, Pending anything that sparse reported (which I haven't checked), this looks ok to me. Reviewed-by: Daniel Axtens Kind regards, Daniel > Just wrap __copy_tofrom_user() for the usual 'unsafe' pattern which > takes in a label to goto on error. > > Signed-o

Re: [PATCH v5 02/10] powerpc/signal: Add unsafe_copy_{vsx, fpr}_from_user()

2021-02-04 Thread Daniel Axtens
not really something that this patch set should do. On that basis: Reviewed-by: Daniel Axtens Kind regards, Daniel > Reuse the "safe" implementation from signal.c except for calling > unsafe_copy_from_user() to copy into a local buffer. > > Signed-off-by: Christopher M. Rie

Re: [PATCH v5 03/10] powerpc/signal64: Move non-inline functions out of setup_sigcontext()

2021-02-07 Thread Daniel Axtens
Hi Chris, These two paragraphs are a little confusing and they seem slightly repetitive. But I get the general idea. Two specific comments below: > There are non-inline functions which get called in setup_sigcontext() to > save register state to the thread struct. Move these functions into a > se

Re: [PATCH v5 03/10] powerpc/signal64: Move non-inline functions out of setup_sigcontext()

2021-02-10 Thread Daniel Axtens
"Christopher M. Riedl" writes: > On Sun Feb 7, 2021 at 10:44 PM CST, Daniel Axtens wrote: >> Hi Chris, >> >> These two paragraphs are a little confusing and they seem slightly >> repetitive. But I get the general idea. Two specific comments below: > >

Re: [PATCH v5 04/10] powerpc: Reference param in MSR_TM_ACTIVE() macro

2021-02-11 Thread Daniel Axtens
sr; |^~ Having said that, this change seems like a good idea, squashing warnings at W=1 is still valuable. Reviewed-by: Daniel Axtens Kind regards, Daniel > Signed-off-by: Christopher M. Riedl > --- > arch/powerpc/include/asm/reg.h | 2 +- > 1 file changed

Re: [PATCH v5 05/10] powerpc/signal64: Remove TM ifdefery in middle of if/else block

2021-02-11 Thread Daniel Axtens
Hi Chris, > Rework the messy ifdef breaking up the if-else for TM similar to > commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of > if/else"). I'm not sure what 'the messy ifdef' and 'the if-else for TM' is (yet): perhaps you could start the commit message with a tiny bit of ba

Re: [PATCH v5 06/10] powerpc/signal64: Replace setup_sigcontext() w/ unsafe_setup_sigcontext()

2021-02-11 Thread Daniel Axtens
+ if (!user_write_access_begin(frame, sizeof(struct rt_sigframe))) > + return -EFAULT; Here you're opening a window for all of `frame`... > + err |= __unsafe_setup_sigcontext(&frame->uc.uc_mcontext, tsk, ... but here you're only passing in frame->uc.uc_mcontext for writing. ISTR that the underlying AMR switch is fully on / fully off so I don't think it really matters, but in this case should you be calling user_write_access_begin() with &frame->uc.uc_mcontext and the size of that? > + ksig->sig, NULL, > + (unsigned > long)ksig->ka.sa.sa_handler, 1); > + user_write_access_end(); > } > err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); > if (err) Apart from the size thing, everything looks good to me. I checked that all the things you've changed from safe to unsafe pass the same parameters, and they all looked good to me. With those caveats, Reviewed-by: Daniel Axtens Kind regards, Daniel
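
The alternative raised in the review — opening the window only over uc_mcontext rather than the whole frame — would look roughly like this. The structure, field and __unsafe_setup_sigcontext() names come from the quoted hunk; the wrapper function is illustrative only, and, as noted, the all-or-nothing AMR switch may make the narrower window a wash in practice.

static long setup_mcontext_window(struct rt_sigframe __user *frame,
				  struct task_struct *tsk,
				  struct ksignal *ksig)
{
	long err = 0;

	/* Open user write access only for the sigcontext being written. */
	if (!user_write_access_begin(&frame->uc.uc_mcontext,
				     sizeof(frame->uc.uc_mcontext)))
		return -EFAULT;

	err |= __unsafe_setup_sigcontext(&frame->uc.uc_mcontext, tsk,
					 ksig->sig, NULL,
					 (unsigned long)ksig->ka.sa.sa_handler,
					 1);
	user_write_access_end();

	return err;
}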

Re: [PATCH v5 07/10] powerpc/signal64: Replace restore_sigcontext() w/ unsafe_restore_sigcontext()

2021-02-12 Thread Daniel Axtens
Hi Chris, > Previously restore_sigcontext() performed a costly KUAP switch on every > uaccess operation. These repeated uaccess switches cause a significant > drop in signal handling performance. > > Rewrite restore_sigcontext() to assume that a userspace read access > window is open. Replace all

Re: [PATCH v5 10/10] powerpc/signal64: Use __get_user() to copy sigset_t

2021-02-12 Thread Daniel Axtens
ser(). Calling __get_user() also results in a small > boost to signal handling throughput here. Modulo the comments from Christophe, this looks good to me. It seems to do what it says on the tin. Reviewed-by: Daniel Axtens Do you know if this patch is responsible for the slight increase in rad

Re: [PATCH 2/6] powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()

2021-02-15 Thread Daniel Axtens
Michael Ellerman writes: > The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes > the key in bits 9-13, but currently we always set those bits to zero. > > In the past that hasn't been a problem because we always used key 0 > for the kernel, and updateboltedpp() is only used for k

Re: [PATCH 3/6] powerpc/64s: Use htab_convert_pte_flags() in hash__mark_rodata_ro()

2021-02-15 Thread Daniel Axtens
ot into a pp value, including the key. So far as I can tell by chasing the definitions around, this appears to do what it claims to do. So, for what it's worth: Reviewed-by: Daniel Axtens Kind regards, Daniel > > Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kerne

Re: [PATCH 4/6] powerpc/mm/64s/hash: Factor out change_memory_range()

2021-02-18 Thread Daniel Axtens
Michael Ellerman writes: > Pull the loop calling hpte_updateboltedpp() out of > hash__change_memory_range() into a helper function. We need it to be a > separate function for the next patch. > > Signed-off-by: Michael Ellerman > --- > arch/powerpc/mm/book3s64/hash_pgtable.c | 23 +++

Re: [PATCH 5/6] powerpc/mm/64s/hash: Add real-mode change_memory_range() for hash LPAR

2021-02-18 Thread Daniel Axtens
Michael Ellerman writes: > When we enabled STRICT_KERNEL_RWX we received some reports of boot > failures when using the Hash MMU and running under phyp. The crashes > are intermittent, and often exhibit as a completely unresponsive > system, or possibly an oops. > > One example, which was caught

Re: [RFC PATCH 1/9] KVM: PPC: Book3S 64: move KVM interrupt entry to a common entry point

2021-02-18 Thread Daniel Axtens
> + beq kvmppc_interrupt_pr > +#endif > + b kvmppc_interrupt_hv > +#else > + b kvmppc_interrupt_pr > +#endif Apart from that I had a look and convinced myself that the code will behave the same as before. On that basis: Reviewed-by: Daniel Axtens Kind regards, Daniel

Re: [RFC PATCH 2/9] KVM: PPC: Book3S 64: Move GUEST_MODE_SKIP test into KVM

2021-02-18 Thread Daniel Axtens
Hi Nick, > +maybe_skip: > + cmpwi r12,0x200 > + beq 1f > + cmpwi r12,0x300 > + beq 1f > + cmpwi r12,0x380 > + beq 1f > +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE > + /* XXX: cbe stuff? instruction breakpoint? */ > + cmpwi r12,0xe02 > + beq 2f

Re: [PATCH] powerpc: use ARRAY_SIZE instead of division operation

2021-02-21 Thread Daniel Axtens
Hi Yang, > This eliminates the following coccicheck warning: > ./arch/powerpc/boot/mktree.c:130:31-32: WARNING: Use ARRAY_SIZE > > Reported-by: Abaci Robot > Signed-off-by: Yang Li > --- > arch/powerpc/boot/mktree.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/arch/p
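
For reference, the transformation coccicheck suggests is purely mechanical; a standalone illustration follows (not the actual mktree.c hunk — the buffer name and size are made up).

#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

int main(void)
{
	unsigned int tmpbuf[64];

	size_t open_coded = sizeof(tmpbuf) / sizeof(tmpbuf[0]);	/* flagged   */
	size_t preferred  = ARRAY_SIZE(tmpbuf);			/* suggested */

	printf("%zu == %zu\n", open_coded, preferred);
	return 0;
}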

Re: [PATCH v2] selftests/powerpc: Fix L1D flushing tests for Power10

2021-02-23 Thread Daniel Axtens
st. - but, if the HW perf event changes again, the test will continue to work. This seems like the correct tradeoff to make. Having spent some quality time with these tests in the past: Acked-by: Daniel Axtens Kind regards, Daniel > > Signed-off-by: Russell Currey > ---

[RFC PATCH 0/8] WIP support for the LLVM integrated assembler

2021-02-24 Thread Daniel Axtens
iel feature section, none of the de-gas-ing changed the compiled binary for me under gcc-10.2.0-13ubuntu1. Daniel Axtens (8): powerpc/64s/exception: Clean up a missed SRR specifier powerpc: check for support for -Wa,-m{power4,any} powerpc/head-64: do less gas-specific stuff with sections p

[PATCH 1/8] powerpc/64s/exception: Clean up a missed SRR specifier

2021-02-24 Thread Daniel Axtens
Nick's patch cleaning up the SRR specifiers in exception-64s.S missed a single instance of EXC_HV_OR_STD. Clean that up. Caught by clang's integrated assembler. Fixes: 3f7fbd97d07d ("powerpc/64s/exception: Clean up SRR specifiers") Acked-by: Nicholas Piggin Signed-o

[RFC PATCH 2/8] powerpc: check for support for -Wa,-m{power4,any}

2021-02-24 Thread Daniel Axtens
LLVM's integrated assembler does not like either -Wa,-mpower4 or -Wa,-many. So just don't pass them if they're not supported. Signed-off-by: Daniel Axtens --- arch/powerpc/Makefile | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/Makefile

[RFC PATCH 3/8] powerpc/head-64: do less gas-specific stuff with sections

2021-02-24 Thread Daniel Axtens
Reopening the section without specifying the same flags breaks the llvm integrated assembler. Don't do it: just specify all the flags all the time. Signed-off-by: Daniel Axtens --- arch/powerpc/include/asm/head-64.h | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --

[RFC PATCH 4/8] powerpc/ppc_asm: use plain numbers for registers

2021-02-24 Thread Daniel Axtens
This is dumb but makes the llvm integrated assembler happy. https://github.com/ClangBuiltLinux/linux/issues/764 Signed-off-by: Daniel Axtens --- arch/powerpc/include/asm/ppc_asm.h | 64 +++--- 1 file changed, 32 insertions(+), 32 deletions(-) diff --git a/arch/powerpc
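
Concretely (two registers shown; the patch converts all of ppc_asm.h), the change is just dropping the '%' spelling that the llvm integrated assembler objects to:

/* before */
#define r0	%r0
#define r1	%r1

/* after: plain numbers, accepted by both gas and the llvm assembler */
#undef r0
#undef r1
#define r0	0
#define r1	1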

[RFC PATCH 5/8] powerpc/lib/quad: Provide macros for lq/stq

2021-02-24 Thread Daniel Axtens
For some reason the integrated assembler in clang-11 doesn't recognise them. Eventually we should fix it there too. Signed-off-by: Daniel Axtens --- arch/powerpc/include/asm/ppc-opcode.h | 4 arch/powerpc/lib/quad.S | 4 ++-- 2 files changed, 6 insertions(+), 2 dele

[RFC PATCH 6/8] powerpc/mm/book3s64/hash: drop pre 2.06 tlbiel for clang

2021-02-24 Thread Daniel Axtens
The llvm integrated assembler does not recognise the ISA 2.05 tlbiel version. Eventually do this more smartly. Signed-off-by: Daniel Axtens --- arch/powerpc/mm/book3s64/hash_native.c | 10 ++ 1 file changed, 10 insertions(+) diff --git a/arch/powerpc/mm/book3s64/hash_native.c b/arch

[RFC PATCH 7/8] powerpc/purgatory: drop .machine specifier

2021-02-24 Thread Daniel Axtens
It's ignored by future versions of llvm's integrated assembler (but not -11). I'm not sure what it does for us in gas. Signed-off-by: Daniel Axtens --- arch/powerpc/purgatory/trampoline_64.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/purgator

[RFC PATCH 8/8] powerpc/64/asm: don't reassign labels

2021-02-24 Thread Daniel Axtens
s about the addis, saying :0: error: expected relocatable expression I don't know what's special about p_end, because just above we do an ABS_ADDR(4f, text) and that seems to work just fine. Signed-off-by: Daniel Axtens --- arch/powerpc/include/asm/head-64.h | 12 +-- a

[PATCH] selftests/powerpc: Add uaccess flush test

2021-02-24 Thread Daniel Axtens
From: Thadeu Lima de Souza Cascardo Also based on the RFI and entry flush tests, it counts the L1D misses by doing a syscall that does user access: uname, in this case. Signed-off-by: Thadeu Lima de Souza Cascardo [dja: forward port, rename function] Signed-off-by: Daniel Axtens --- This

Re: [RFC PATCH 4/8] powerpc/ppc_asm: use plain numbers for registers

2021-02-25 Thread Daniel Axtens
Segher Boessenkool writes: > On Thu, Feb 25, 2021 at 02:10:02PM +1100, Daniel Axtens wrote: >> This is dumb but makes the llvm integrated assembler happy. >> https://github.com/ClangBuiltLinux/linux/issues/764 > >> -#define r0 %r0 > >> +#define r0

Re: [RFC PATCH 5/8] powerpc/lib/quad: Provide macros for lq/stq

2021-02-25 Thread Daniel Axtens
Segher Boessenkool writes: > On Thu, Feb 25, 2021 at 02:10:03PM +1100, Daniel Axtens wrote: >> +#define PPC_RAW_LQ(t, a, dq)(0xe000 | ___PPC_RT(t) | >> ___PPC_RA(a) | (((dq) & 0xfff) << 3)) > > Please keep the operand order the same as for th

Re: [RFC PATCH 7/8] powerpc/purgatory: drop .machine specifier

2021-02-25 Thread Daniel Axtens
Segher Boessenkool writes: > On Thu, Feb 25, 2021 at 02:10:05PM +1100, Daniel Axtens wrote: >> It's ignored by future versions of llvm's integrated assembler (but not -11). >> I'm not sure what it does for us in gas. > > It enables all insns that exist

Re: [RFC PATCH 8/8] powerpc/64/asm: don't reassign labels

2021-02-25 Thread Daniel Axtens
Segher Boessenkool writes: > On Thu, Feb 25, 2021 at 02:10:06PM +1100, Daniel Axtens wrote: >> The assembler really does not like us reassigning things to the same >> label: >> >> :7:9: error: invalid reassignment of non-absolute variable >> 'fs_label'

Re: [PATCH v2 01/37] KVM: PPC: Book3S 64: remove unused kvmppc_h_protect argument

2021-02-25 Thread Daniel Axtens
unsigned long pte_index, unsigned long avpn);$ Once that is resolved, Reviewed-by: Daniel Axtens Kind regards, Daniel Axtens > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/include/asm/kvm_ppc.h | 3 +-- > arch/powerpc/kvm/book3s_hv_rm_mmu.c | 3 +-- > 2 files cha

Re: [PATCH v2 02/37] KVM: PPC: Book3S HV: Fix CONFIG_SPAPR_TCE_IOMMU=n default hcalls

2021-02-25 Thread Daniel Axtens
e warning. This seems like the right solution to me. Reviewed-by: Daniel Axtens Kind regards, Daniel > > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/kvm/book3s_hv.c | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/p

Re: [PATCH v2 04/37] powerpc/64s: remove KVM SKIP test from instruction breakpoint handler

2021-02-25 Thread Daniel Axtens
can follow your logic for removing the IKVM_SKIP handler from the instruction breakpoint exception. On that basis: Reviewed-by: Daniel Axtens Kind regards, Daniel > > Reviewed-by: Fabiano Rosas > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/kernel/exceptions-64s.S |

Re: [PATCH v2 05/37] KVM: PPC: Book3S HV: Ensure MSR[ME] is always set in guest MSR

2021-02-25 Thread Daniel Axtens
that the transactional state is sane so this is another sanity-enforcement in the same sort of vein. All up: Reviewed-by: Daniel Axtens Kind regards, Daniel > + > /* >* Check for illegal transactional state bit combination >* and if we find it, force the TS field to a safe state. > -- > 2.23.0

Re: [PATCH v1 01/15] powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()

2021-03-01 Thread Daniel Axtens
Christophe Leroy writes: > Those two macros have only one user which is unsafe_get_user(). > > Put everything in one place and remove them. > > Signed-off-by: Christophe Leroy > --- > arch/powerpc/include/asm/uaccess.h | 10 +- > 1 file changed, 5 insertions(+), 5 deletions(-) > > di

Re: [PATCH v1 02/15] powerpc/uaccess: Define ___get_user_instr() for ppc32

2021-03-01 Thread Daniel Axtens
ocumenting what I considered while reviewing your patch.) As such: Reviewed-by: Daniel Axtens Kind regards, Daniel > - > extern long __put_user_bad(void); > > #define __put_user_size(x, ptr, size, retval)\ > -- > 2.25.0

Re: [PATCH v1 03/15] powerpc/uaccess: Remove __get/put_user_inatomic()

2021-03-01 Thread Daniel Axtens
Christophe Leroy writes: > Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with > pagefault_disable()"), __get/put_user() can be used in atomic parts > of the code, therefore the __get/put_user_inatomic() introduced > by commit e68c825bb016 ("[POWERPC] Add inatomic versions of __ge

Re: [PATCH v2 05/37] KVM: PPC: Book3S HV: Ensure MSR[ME] is always set in guest MSR

2021-03-04 Thread Daniel Axtens
Daniel > >> >> The patch seems sane to me, I agree that we don't want guests running with >> MSR_ME=0 and kvmppc_set_msr_hv already ensures that the transactional state >> is >> sane so this is another sanity-enforcement in the same sort of vein. >> >> All up: >> Reviewed-by: Daniel Axtens > > Thanks, > Nick

Re: [PATCH v2 06/37] KVM: PPC: Book3S 64: move KVM interrupt entry to a common entry point

2021-03-04 Thread Daniel Axtens
s I checked this against the RFC which I was happy with and there are no changes that I am concerned about. The expanded comment is also nice. As such: Reviewed-by: Daniel Axtens Kind regards, Daniel > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/kernel/exceptions-64s.S

Re: [PATCH v2 07/37] KVM: PPC: Book3S 64: Move GUEST_MODE_SKIP test into KVM

2021-03-04 Thread Daniel Axtens
4s.S and indeed the GEN_KVM macro will add 0x2 to the IVEC if IHSRR is set, and it is set for h_data_storage. So this checks out to me. I have checked, to the best of my limited assembler capabilities that the logic before and after matches. It seems good to me. On that limited basis, Reviewed-by: Da

Re: [PATCH v2 01/15] powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()

2021-03-10 Thread Daniel Axtens
Hi Christophe, Thanks for the answers to my questions on v1. This all looks good to me. Reviewed-by: Daniel Axtens Kind regards, Daniel > Those two macros have only one user which is unsafe_get_user(). > > Put everything in one place and remove them. > > Signed-off-by: C

Re: [PATCH v2 03/15] powerpc/align: Convert emulate_spe() to user_access_begin

2021-03-10 Thread Daniel Axtens
t_read: Checkpatch complains that this is CamelCase, which seems like a checkpatch problem. Efault_{read,write} seem like good labels to me. (You don't need to change anything, I just like to check the checkpatch results when reviewing a patch.) > + user_read_access_end(); > + return -EFAULT; > + > +Efault_write: > + user_write_access_end(); > + return -EFAULT; > } > #endif /* CONFIG_SPE */ > With the user_write_access_begin change: Reviewed-by: Daniel Axtens Kind regards, Daniel

Re: [PATCH v2 04/15] powerpc/uaccess: Remove __get/put_user_inatomic()

2021-03-10 Thread Daniel Axtens
can be used in atomic parts > of the code, therefore __get/put_user_inatomic() have become useless. > > Remove __get_user_inatomic() and __put_user_inatomic(). > This makes much more sense, thank you! Simplifying uaccess.h is always good to me :) Reviewed-by: Daniel Axtens Kind

Re: [PATCH v3 03/41] KVM: PPC: Book3S HV: Remove redundant mtspr PSPB

2021-03-11 Thread Daniel Axtens
entry/exit path on P9 for radix guests"), and there wasn't any justification for having a double mtspr. So, this seems good: Reviewed-by: Daniel Axtens Kind regards, Daniel > Suggested-by: Fabiano Rosas > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/kvm/book3s_hv.c

Re: [PATCH v3 12/41] KVM: PPC: Book3S 64: Move hcall early register setup to KVM

2021-03-11 Thread Daniel Axtens
Nicholas Piggin writes: > System calls / hcalls have a different calling convention than > other interrupts, so there is code in the KVMTEST to massage these > into the same form as other interrupt handlers. > > Move this work into the KVM hcall handler. This means teaching KVM > a little more ab
