Christophe Leroy writes:
> On 19/08/2019 at 08:28, Daniel Axtens wrote:
>> In KASAN development I noticed that the powerpc-specific bitops
>> were not being picked up by the KASAN test suite.
>
> I'm not sure anybody cares about who noticed the problem. This sentenc
exists to support sparse, squashing a bunch of sparse warnings.
From the commit where I introduced it:
commit 42f5b4cacd783faf05e3ff8bf85e8be31f3dfa9d
Author: Daniel Axtens
Date: Wed May 18 11:16:50 2016 +1000
powerpc: Introduce asm-prototypes.h
Sparse picked up a number of function
Hi Chris,
> Read-only mode should not prevent listing and clearing any active
> breakpoints.
I tested this and it works for me:
Tested-by: Daniel Axtens
> + if (xmon_is_ro || !scanhex(&a)) {
It took me a while to figure out what this line does: as I understand
it, th
Hi,
> Xmon should be either fully or partially disabled depending on the
> kernel lockdown state.
I've been kicking the tyres of this, and it seems to work well:
Tested-by: Daniel Axtens
>
> Put xmon into read-only mode for lockdown=integrity and prevent user
> entry int
he vmap book-keeping
- Split out the test into a separate patch
- Optional patch to track the number of pages allocated
- minor checkpatch cleanups
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vmalloc
fork: support VMAP_STACK with
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Signed-off-by: Daniel Axtens
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland
--
v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
v3: relax module alignment
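(For reference, the v2 note above about partial shadow bytes boils down to the
following simplified, standalone sketch. It is not the kernel's
kasan_unpoison_shadow(), and the macro and function names are illustrative.)
#include <string.h>

#define GRANULE_SHIFT 3                        /* one shadow byte covers 8 bytes */
#define GRANULE       (1UL << GRANULE_SHIFT)

/* Mark `size` bytes addressable in a shadow region: whole granules get
 * shadow value 0; a trailing partial granule stores how many of its
 * 8 bytes (1..7) are actually valid. */
static void sketch_unpoison(unsigned char *shadow, unsigned long size)
{
        unsigned long full = size >> GRANULE_SHIFT;

        memset(shadow, 0, full);
        if (size & (GRANULE - 1))
                shadow[full] = size & (GRANULE - 1);
}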
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
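The first point amounts to something like the sketch below (illustrative only,
with a hypothetical helper name; the real change lives in the vmapped-stack
reuse path in kernel/fork.c):
#include <linux/kasan.h>
#include <linux/thread_info.h>
#include <linux/vmalloc.h>

/* When a previously used vmap'd stack is handed to a new task, its shadow
 * may still be poisoned from the old owner's lifetime, so mark the whole
 * stack addressable again before reuse. */
static void sketch_unpoison_vmap_stack(struct vm_struct *stack)
{
        kasan_unpoison_shadow(stack->addr, THREAD_SIZE);
}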
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan_vmalloc/shadow_pages.
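Roughly, the debugfs side looks like this (minimal sketch, not the actual
patch; the counter name and where it gets incremented are assumptions):
#include <linux/debugfs.h>
#include <linux/init.h>
#include <linux/types.h>

/* Assumed counter; bumped/decremented wherever shadow pages are
 * allocated and freed for vmalloc space. */
static u64 kasan_vmalloc_shadow_pages;

static int __init kasan_vmalloc_debugfs_init(void)
{
        struct dentry *dir = debugfs_create_dir("kasan_vmalloc", NULL);

        /* read-only: /sys/kernel/debug/kasan_vmalloc/shadow_pages */
        debugfs_create_u64("shadow_pages", 0444, dir,
                           &kasan_vmalloc_shadow_pages);
        return 0;
}
late_initcall(kasan_vmalloc_debugfs_init);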
Signed-off-by: Daniel Axtens
---
Merging this is probably overkill, but I leave it to the discretion
of the broader community.
On v4 (no dynamic freeing), I saw the following approximate
Daniel Axtens writes:
> Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that prov
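For context, the shape of an instrumented bitop is roughly the following
(illustrative; the wrapper name here is made up, but the check-then-call-arch
pattern is what bitops-instrumented.h does):
#include <linux/bitops.h>
#include <linux/kasan-checks.h>

static inline void instrumented_set_bit(long nr, volatile unsigned long *addr)
{
        /* report out-of-bounds / use-after-free before touching the word */
        kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
        arch_set_bit(nr, addr);        /* the architecture's real implementation */
}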
Hi all,
> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> + void *unused)
> +{
> + unsigned long page;
> +
> + page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
> +
> + spin_lock(&init_mm.page_table_lock);
> +
> +
it messages and docs
- Dynamically free unused shadow pages by hooking into the vmap book-keeping
- Split out the test into a separate patch
- Optional patch to track the number of pages allocated
- minor checkpatch cleanups
v6: Properly guard freeing pages in patch 1, drop debugging code.
Daniel
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Signed-off-by: Daniel Axtens
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland
--
v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
v3: relax module alignment
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan_vmalloc/shadow_pages.
Signed-off-by: Daniel Axtens
---
Merging this is probably overkill, but I leave it to the discretion
of the broader community.
On v4 (no dynamic freeing), I saw the following approximate
Hi Mark,
>> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
>> +                                        void *unused)
>> +{
>> +        unsigned long page;
>> +
>> +        page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
>> +
>> +        spin_lock(&init_mm.page_table_lock);
>>
net/
Properly guard freeing pages in patch 1, drop debugging code.
v7: Add a TLB flush on freeing, thanks Mark Rutland.
Explain more clearly how I think freeing is concurrency-safe.
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vma
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Signed-off-by: Daniel Axtens
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland
--
v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
v3: relax module alignment
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan_vmalloc/shadow_pages.
Signed-off-by: Daniel Axtens
---
Merging this is probably overkill, but I leave it to the discretion
of the broader community.
On v4 (no dynamic freeing), I saw the following approximate
Andrey Konovalov writes:
> On Tue, Sep 3, 2019 at 4:56 PM Daniel Axtens wrote:
>>
>> Provide the current number of vmalloc shadow pages in
>> /sys/kernel/debug/kasan_vmalloc/shadow_pages.
>
> Maybe it makes sense to put this into /sys/kernel/debug/kasan/
> (
Hi Christophe,
> Are any other patches required prior to this series ? I have tried to
> apply it on later powerpc/merge branch without success:
It applies on the latest linux-next. I didn't base it on powerpc/*
because it's generic.
Regards,
Daniel
Hi,
So Matthew Garrett and I talked about this at Linux Plumbers. Matthew,
if I understood correctly, your concern was that this doesn't sit well
with the existing threat model for lockdown. As I understand it, the
idea is that if you're able to get access to the physical console,
you're already a
rename kasan_vmalloc/shadow_pages to kasan/vmalloc_shadow_pages
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vmalloc
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC
kasan debug: track pages allocated for vmalloc shadow
Documentation/dev-t
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Signed-off-by: Daniel Axtens
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland
--
v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
v3: relax module alignment
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan/vmalloc_shadow_pages.
Signed-off-by: Daniel Axtens
---
v8: rename kasan_vmalloc/shadow_pages -> kasan/vmalloc_shadow_pages
On v4 (no dynamic freeing), I saw the following approximate figures
on my test VM:
- fr
Hi,
>> /*
>> * Find a place in the tree where VA potentially will be
>> * inserted, unless it is merged with its sibling/siblings.
>> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>> if (sibling->va_end == va->va_start) {
>> si
Russell Currey writes:
> Very rudimentary, just
>
> echo 1 > [debugfs]/check_wx_pages
>
> and check the kernel log. Useful for testing strict module RWX.
I was very confused that this requires the boot-time testing to be
enabled to appear in debugfs. Could you change the kconfig snippet f
Hi Christophe,
As well as the checkpatch warnings noted on Patchwork
(https://patchwork.ozlabs.org/patch/1173804/ and
https://patchwork.ozlabs.org/patch/1173805/), I noticed:
Applying: powerpc/powernv: ocxl move SPA definition
.git/rebase-apply/patch:405: new blank line at EOF.
Hi Russell,
Tested-by: Daniel Axtens # e6500
Because ptdump isn't quite working on book3e 64bit atm, I hacked it up
to print the raw PTE and the extracted flags. After loading a module, I
see the supervisor write bit set without module RWX, and it cleared with
module RWX. Modules still se
Hi Uladzislau,
> Looking at it one more, i think above part of code is a bit wrong
> and should be separated from merge_or_add_vmap_area() logic. The
> reason is to keep it simple and do only what it is supposed to do:
> merging or adding.
>
> Also the kasan_release_vmalloc() gets called twice th
Hi Andrey,
>> +/*
>> + * Ensure poisoning is visible before the shadow is made visible
>> + * to other CPUs.
>> + */
>> +smp_wmb();
>
> I'm not quite understand what this barrier do and why it needed.
> And if it's really needed there should be a pairing barrier
> on the other
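The generic pattern I had in mind is the usual publish/consume pairing; a toy
example follows (not the KASAN code, and the eventual fix in the series may
use a different pairing):
#include <linux/compiler.h>
#include <asm/barrier.h>

static int data;
static int published;

static void writer(void)
{
        data = 42;                     /* initialise (cf. poisoning the shadow)   */
        smp_wmb();                     /* order the init before the publication   */
        WRITE_ONCE(published, 1);      /* make the object visible to other CPUs   */
}

static int reader(void)
{
        if (!READ_ONCE(published))
                return -1;
        smp_rmb();                     /* pairs with the writer's smp_wmb()       */
        return data;                   /* guaranteed to observe data == 42        */
}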
Mark Rutland writes:
> On Tue, Oct 01, 2019 at 04:58:30PM +1000, Daniel Axtens wrote:
>> Hook into vmalloc and vmap, and dynamically allocate real shadow
>> memory to back the mappings.
>>
>> Most mappings in vmalloc space are small, requiring less than a f
>>> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size,
>>> unsigned long align,
>>> if (!addr)
>>> return NULL;
>>>
>>> + if (kasan_populate_vmalloc(real_size, area))
>>> + return NULL;
>>> +
>>
>> KASAN itself uses __vmalloc_node_range() to allocate
> There is a potential problem here, as Will Deacon wrote up at:
>
>
> https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-w...@kernel.org/
>
> ... in the section starting:
>
> | *** Other architecture maintainers -- start here! ***
>
> ... whereby the CPU can spuriously fault on a
0191001065834.8880-1-...@axtens.net/
rename kasan_vmalloc/shadow_pages to kasan/vmalloc_shadow_pages
v9: address a number of review comments for patch 1.
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vmalloc
fork: support VMAP_STACK with
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Co-developed-by: Mark Rutland
Signed-off-by: Mark Rutland [shadow rework]
Signed-off-by: Daniel Axtens
--
[I haven't tried to resolve the question of spurious faults. My
understanding is that in order to se
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git lib/test_kasan.c lib/test_kasan.c
index 49cc4d570a40
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan/vmalloc_shadow_pages.
Signed-off-by: Daniel Axtens
---
v8: rename kasan_vmalloc/shadow_pages -> kasan/vmalloc_shadow_pages
On v4 (no dynamic freeing), I saw the following approximate figures
on my test VM:
- fr
s worth adding a new condition -
at some stage we'll want to add a backend for pSeries anyway.
Fixes: 61f879d97ce4 ("powerpc/pseries: Detect secure and trusted boot state of the system.")
Cc: Nayna Jain
Signed-off-by: Daniel Axtens
---
arch/powerpc/Kconfig | 2 +-
1 file changed
patch locally:
$ scripts/checkpatch.pl -g HEAD -strict
WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#15:
make[3]: *** [./scripts/Makefile.build:316: drivers/cpufreq/powernv-cpufreq.o] Error 1
This is benign and you shouldn't wrap that line anyway.
Hi,
Apologies if this has come up in a previous revision.
> case 1:
> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
> + return -1;
> +
> prefix_r = GET_PREFIX_R(word);
> ra = GET_PREFIX_RA(suffix);
The comment above analyse_instr read
Ravi Bangoria writes:
> Hi Daniel,
>
> On 10/12/20 7:21 AM, Daniel Axtens wrote:
>> Hi,
>>
>> Apologies if this has come up in a previous revision.
>>
>>
>>> case 1:
>>> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
>>
does, and looks good
to me:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> From: Balamuruhan S
>
> Unconditional emulation of prefixed instructions will allow
> emulation of them on Power10 predecessors which might cause
> issues. Restrict that.
>
> Fixes: 3920742b92f5 ("
Hi Chris,
Pending anything that sparse reported (which I haven't checked), this
looks ok to me.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Just wrap __copy_tofrom_user() for the usual 'unsafe' pattern which
> takes in a label to goto on error.
>
> Signed-o
not
really something that this patch set should do.
On that basis:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Reuse the "safe" implementation from signal.c except for calling
> unsafe_copy_from_user() to copy into a local buffer.
>
> Signed-off-by: Christopher M. Rie
Hi Chris,
These two paragraphs are a little confusing and they seem slightly
repetitive. But I get the general idea. Two specific comments below:
> There are non-inline functions which get called in setup_sigcontext() to
> save register state to the thread struct. Move these functions into a
> se
"Christopher M. Riedl" writes:
> On Sun Feb 7, 2021 at 10:44 PM CST, Daniel Axtens wrote:
>> Hi Chris,
>>
>> These two paragraphs are a little confusing and they seem slightly
>> repetitive. But I get the general idea. Two specific comments below:
>
>
sr;
|^~
Having said that, this change seems like a good idea, squashing warnings
at W=1 is still valuable.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Signed-off-by: Christopher M. Riedl
> ---
> arch/powerpc/include/asm/reg.h | 2 +-
> 1 file changed
Hi Chris,
> Rework the messy ifdef breaking up the if-else for TM similar to
> commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of
> if/else").
I'm not sure what 'the messy ifdef' and 'the if-else for TM' are (yet):
perhaps you could start the commit message with a tiny bit of
ba
> + if (!user_write_access_begin(frame, sizeof(struct rt_sigframe)))
> + return -EFAULT;
Here you're opening a window for all of `frame`...
> + err |= __unsafe_setup_sigcontext(&frame->uc.uc_mcontext, tsk,
... but here you're only passing in frame->uc.uc_mcontext for writing.
ISTR that the underlying AMR switch is fully on / fully off so I don't
think it really matters, but in this case should you be calling
user_write_access_begin() with &frame->uc.uc_mcontext and the size of
that?
> + ksig->sig, NULL,
> + (unsigned
> long)ksig->ka.sa.sa_handler, 1);
> + user_write_access_end();
> }
> err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
> if (err)
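To be concrete about the sizing question, something like this is what I have
in mind (illustrative only, not a tested diff):
        if (!user_write_access_begin(&frame->uc.uc_mcontext,
                                     sizeof(frame->uc.uc_mcontext)))
                return -EFAULT;
        /* unsafe writes limited to uc_mcontext go here */
        user_write_access_end();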
Apart from the size thing, everything looks good to me. I checked that
all the things you've changed from safe to unsafe pass the same
parameters, and they all looked good to me.
With those caveats,
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
Hi Chris,
> Previously restore_sigcontext() performed a costly KUAP switch on every
> uaccess operation. These repeated uaccess switches cause a significant
> drop in signal handling performance.
>
> Rewrite restore_sigcontext() to assume that a userspace read access
> window is open. Replace all
ser(). Calling __get_user() also results in a small
> boost to signal handling throughput here.
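For anyone following along, the overall pattern being described is roughly the
following (sketch only; the struct, field and label names are illustrative and
not the actual patch):
#include <linux/uaccess.h>
#include <asm/ptrace.h>
#include <asm/sigcontext.h>

static long sketch_restore_sigcontext(struct sigcontext __user *sc,
                                      struct pt_regs *regs)
{
        unsigned long msr;

        /* open the read window once ... */
        if (!user_read_access_begin(sc, sizeof(*sc)))
                return -EFAULT;
        /* ... then use unsafe accessors, which skip the per-access
         * KUAP open/close that made the old code slow */
        unsafe_get_user(msr, &sc->gp_regs[PT_MSR], efault);
        /* further unsafe_get_user()/unsafe_copy_from_user() calls here */
        user_read_access_end();

        regs->msr = msr;
        return 0;

efault:
        user_read_access_end();
        return -EFAULT;
}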
Modulo the comments from Christophe, this looks good to me. It seems to
do what it says on the tin.
Reviewed-by: Daniel Axtens
Do you know if this patch is responsible for the slight increase in
rad
Michael Ellerman writes:
> The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes
> the key in bits 9-13, but currently we always set those bits to zero.
>
> In the past that hasn't been a problem because we always used key 0
> for the kernel, and updateboltedpp() is only used for k
ot into a pp value, including the key.
So far as I can tell by chasing the definitions around, this appears
to do what it claims to do.
So, for what it's worth:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
>
> Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kerne
Michael Ellerman writes:
> Pull the loop calling hpte_updateboltedpp() out of
> hash__change_memory_range() into a helper function. We need it to be a
> separate function for the next patch.
>
> Signed-off-by: Michael Ellerman
> ---
> arch/powerpc/mm/book3s64/hash_pgtable.c | 23 +++
Michael Ellerman writes:
> When we enabled STRICT_KERNEL_RWX we received some reports of boot
> failures when using the Hash MMU and running under phyp. The crashes
> are intermittent, and often exhibit as a completely unresponsive
> system, or possibly an oops.
>
> One example, which was caught
> + beq kvmppc_interrupt_pr
> +#endif
> + b kvmppc_interrupt_hv
> +#else
> + b kvmppc_interrupt_pr
> +#endif
Apart from that I had a look and convinced myself that the code will
behave the same as before. On that basis:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
Hi Nick,
> +maybe_skip:
> + cmpwi r12,0x200
> + beq 1f
> + cmpwi r12,0x300
> + beq 1f
> + cmpwi r12,0x380
> + beq 1f
> +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> + /* XXX: cbe stuff? instruction breakpoint? */
> + cmpwi r12,0xe02
> + beq 2f
Hi Yang,
> This eliminates the following coccicheck warning:
> ./arch/powerpc/boot/mktree.c:130:31-32: WARNING: Use ARRAY_SIZE
>
> Reported-by: Abaci Robot
> Signed-off-by: Yang Li
> ---
> arch/powerpc/boot/mktree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/p
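(The transformation the coccinelle rule asks for is mechanical; illustrated
below with made-up names, not the mktree.c hunk itself:)
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static unsigned int entries[16];

/* before: sizeof(entries) / sizeof(entries[0])
 * after:  ARRAY_SIZE(entries) -- same value, but clearer and harder to get
 * subtly wrong, e.g. if `entries` is ever changed to a pointer the
 * open-coded division silently gives garbage. */
static const unsigned int nr_entries = ARRAY_SIZE(entries);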
st.
- but, if the HW perf event changes again, the test will continue to
work.
This seems like the correct tradeoff to make.
Having spent some quality time with these tests in the past:
Acked-by: Daniel Axtens
Kind regards,
Daniel
>
> Signed-off-by: Russell Currey
> ---
>
iel feature
section, none of the de-gas-ing changed the compiled binary for me
under gcc-10.2.0-13ubuntu1.
Daniel Axtens (8):
powerpc/64s/exception: Clean up a missed SRR specifier
powerpc: check for support for -Wa,-m{power4,any}
powerpc/head-64: do less gas-specific stuff with sections
p
Nick's patch cleaning up the SRR specifiers in exception-64s.S
missed a single instance of EXC_HV_OR_STD. Clean that up.
Caught by clang's integrated assembler.
Fixes: 3f7fbd97d07d ("powerpc/64s/exception: Clean up SRR specifiers")
Acked-by: Nicholas Piggin
Signed-o
LLVM's integrated assembler does not like either -Wa,-mpower4
or -Wa,-many. So just don't pass them if they're not supported.
Signed-off-by: Daniel Axtens
---
arch/powerpc/Makefile | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/Makefile
Reopening the section without specifying the same flags breaks
the llvm integrated assembler. Don't do it: just specify all the
flags all the time.
Signed-off-by: Daniel Axtens
---
arch/powerpc/include/asm/head-64.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --
This is dumb but makes the llvm integrated assembler happy.
https://github.com/ClangBuiltLinux/linux/issues/764
Signed-off-by: Daniel Axtens
---
arch/powerpc/include/asm/ppc_asm.h | 64 +++---
1 file changed, 32 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc
For some reason the integrated assembler in clang-11 doesn't recognise
them. Eventually we should fix it there too.
Signed-off-by: Daniel Axtens
---
arch/powerpc/include/asm/ppc-opcode.h | 4
arch/powerpc/lib/quad.S | 4 ++--
2 files changed, 6 insertions(+), 2 dele
The llvm integrated assembler does not recognise the ISA 2.05 tlbiel
version. Eventually do this more smartly.
Signed-off-by: Daniel Axtens
---
arch/powerpc/mm/book3s64/hash_native.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/powerpc/mm/book3s64/hash_native.c
b/arch
It's ignored by future versions of llvm's integrated assembler (but not -11).
I'm not sure what it does for us in gas.
Signed-off-by: Daniel Axtens
---
arch/powerpc/purgatory/trampoline_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/purgator
s about the addis,
saying
:0: error: expected relocatable expression
I don't know what's special about p_end, because just above we do an
ABS_ADDR(4f, text) and that seems to work just fine.
Signed-off-by: Daniel Axtens
---
arch/powerpc/include/asm/head-64.h | 12 +--
a
From: Thadeu Lima de Souza Cascardo
Also based on the RFI and entry flush tests, it counts the L1D misses
by doing a syscall that does user access: uname, in this case.
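A standalone userspace sketch of that measurement (illustrative, not the
in-tree selftest; the event choice and loop count are arbitrary):
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/utsname.h>
#include <unistd.h>

int main(void)
{
        struct perf_event_attr attr;
        struct utsname un;
        long long misses = 0;
        int fd, i;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HW_CACHE;
        attr.config = PERF_COUNT_HW_CACHE_L1D |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
        attr.disabled = 1;              /* count only around the syscall loop */

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        for (i = 0; i < 1000; i++)
                uname(&un);             /* syscall that copies data to userspace */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        read(fd, &misses, sizeof(misses));
        printf("L1D read misses across 1000 uname() calls: %lld\n", misses);
        close(fd);
        return 0;
}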
Signed-off-by: Thadeu Lima de Souza Cascardo
[dja: forward port, rename function]
Signed-off-by: Daniel Axtens
---
This
Segher Boessenkool writes:
> On Thu, Feb 25, 2021 at 02:10:02PM +1100, Daniel Axtens wrote:
>> This is dumb but makes the llvm integrated assembler happy.
>> https://github.com/ClangBuiltLinux/linux/issues/764
>
>> -#define r0 %r0
>
>> +#define r0
Segher Boessenkool writes:
> On Thu, Feb 25, 2021 at 02:10:03PM +1100, Daniel Axtens wrote:
>> +#define PPC_RAW_LQ(t, a, dq)(0xe000 | ___PPC_RT(t) | ___PPC_RA(a) | (((dq) & 0xfff) << 3))
>
> Please keep the operand order the same as for th
Segher Boessenkool writes:
> On Thu, Feb 25, 2021 at 02:10:05PM +1100, Daniel Axtens wrote:
>> It's ignored by future versions of llvm's integrated assembler (but not -11).
>> I'm not sure what it does for us in gas.
>
> It enables all insns that exist
Segher Boessenkool writes:
> On Thu, Feb 25, 2021 at 02:10:06PM +1100, Daniel Axtens wrote:
>> The assembler really does not like us reassigning things to the same
>> label:
>>
>> :7:9: error: invalid reassignment of non-absolute variable
>> 'fs_label'
unsigned long pte_index, unsigned long avpn);$
Once that is resolved,
Reviewed-by: Daniel Axtens
Kind regards,
Daniel Axtens
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/include/asm/kvm_ppc.h | 3 +--
> arch/powerpc/kvm/book3s_hv_rm_mmu.c | 3 +--
> 2 files cha
e
warning.
This seems like the right solution to me.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
>
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/kvm/book3s_hv.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/p
can follow your logic for removing the IKVM_SKIP handler from the
instruction breakpoint exception.
On that basis:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
>
> Reviewed-by: Fabiano Rosas
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/kernel/exceptions-64s.S |
that the transactional state is
sane so this is another sanity-enforcement in the same sort of vein.
All up:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> +
> /*
>* Check for illegal transactional state bit combination
>* and if we find it, force the TS field to a safe state.
> --
> 2.23.0
Christophe Leroy writes:
> Those two macros have only one user which is unsafe_get_user().
>
> Put everything in one place and remove them.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/uaccess.h | 10 +-
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> di
ocumenting what I
considered while reviewing your patch.)
As such:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> -
> extern long __put_user_bad(void);
>
> #define __put_user_size(x, ptr, size, retval)\
> --
> 2.25.0
Christophe Leroy writes:
> Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
> pagefault_disable()"), __get/put_user() can be used in atomic parts
> of the code, therefore the __get/put_user_inatomic() introduced
> by commit e68c825bb016 ("[POWERPC] Add inatomic versions of __ge
Daniel
>
>>
>> The patch seems sane to me, I agree that we don't want guests running with
>> MSR_ME=0 and kvmppc_set_msr_hv already ensures that the transactional state
>> is
>> sane so this is another sanity-enforcement in the same sort of vein.
>>
>> All up:
>> Reviewed-by: Daniel Axtens
>
> Thanks,
> Nick
s
I checked this against the RFC which I was happy with and there are no
changes that I am concerned about. The expanded comment is also nice.
As such:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/kernel/exceptions-64s.S
4s.S and indeed the GEN_KVM macro will add 0x2 to the IVEC
if IHSRR is set, and it is set for h_data_storage.
So this checks out to me.
I have checked, to the best of my limited assembler capabilities that
the logic before and after matches. It seems good to me.
On that limited basis,
Reviewed-by: Da
Hi Christophe,
Thanks for the answers to my questions on v1.
This all looks good to me.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Those two macros have only one user which is unsafe_get_user().
>
> Put everything in one place and remove them.
>
> Signed-off-by: C
t_read:
Checkpatch complains that this is CamelCase, which seems like a
checkpatch problem. Efault_{read,write} seem like good labels to me.
(You don't need to change anything, I just like to check the checkpatch
results when reviewing a patch.)
> + user_read_access_end();
> + return -EFAULT;
> +
> +Efault_write:
> + user_write_access_end();
> + return -EFAULT;
> }
> #endif /* CONFIG_SPE */
>
With the user_write_access_begin change:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
can be used in atomic parts
> of the code, therefore __get/put_user_inatomic() have become useless.
>
> Remove __get_user_inatomic() and __put_user_inatomic().
>
This makes much more sense, thank you!
Simplifying uaccess.h is always good to me :)
Reviewed-by: Daniel Axtens
Kind
entry/exit path on P9 for radix
guests"), and there wasn't any justification for having a double mtspr.
So, this seems good:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Suggested-by: Fabiano Rosas
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/kvm/book3s_hv.c
Nicholas Piggin writes:
> System calls / hcalls have a different calling convention than
> other interrupts, so there is code in the KVMTEST to massage these
> into the same form as other interrupt handlers.
>
> Move this work into the KVM hcall handler. This means teaching KVM
> a little more ab