From: Ravi Bangoria <ravi.bango...@linux.ibm.com>

On PPC64 with KUAP enabled, any kernel code that wants to access
userspace needs to be surrounded by KUAP disable/enable. But that
is not happening for the BPF_PROBE_MEM load instruction. So, when a
BPF program tries to access an invalid userspace address, the
page-fault handler treats it as a bad KUAP fault:

  Kernel attempted to read user page (d0000000) - exploit attempt? (uid: 0)

Considering that PTR_TO_BTF_ID (which uses BPF_PROBE_MEM mode) can
either be a valid kernel pointer or NULL, but should never be a
userspace address, execute the BPF_PROBE_MEM load only if the address
is a kernel address; otherwise set dst_reg=0 and move on.

This catches NULL as well as valid and invalid userspace pointers.
Only a bad kernel pointer is left to be handled by the BPF exception
table.
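
For illustration only (not part of the patch): a minimal C sketch of
the check the emitted code performs for a BPF_PROBE_MEM load. The
function name probe_mem_load_sketch, the 'boundary' parameter and the
stdint types are made up for this sketch; 'boundary' stands in for
PAGE_OFFSET on BOOK3S_64 or 0x8000000000000000ul on BOOK3E_64.

  #include <stdint.h>

  /*
   * Sketch only -- not the JIT-emitted code. Models the guard added
   * around a BPF_PROBE_MEM load: addresses below the kernel boundary
   * are never dereferenced and the destination is simply zeroed.
   */
  static uint64_t probe_mem_load_sketch(uintptr_t src, int off,
                                        uintptr_t boundary)
  {
          uintptr_t addr = src + off;

          if (addr < boundary)
                  return 0;   /* NULL or userspace pointer: skip the load */

          /*
           * Kernel address: do the real load. A faulting kernel
           * pointer is handled by the BPF exception table instead.
           */
          return *(uint64_t *)addr;
  }

The patch below emits this logic as JITed powerpc instructions.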

[Alexei suggested this approach for x86]
Suggested-by: Alexei Starovoitov <a...@kernel.org>
Signed-off-by: Ravi Bangoria <ravi.bango...@linux.ibm.com>
Signed-off-by: Hari Bathini <hbath...@linux.ibm.com>
Reviewed-by: Christophe Leroy <christophe.le...@csgroup.eu>
---

Changes in v4:
* Used IS_ENABLED() instead of #ifdef.
* Dropped the else case that is not applicable for PPC64.


 arch/powerpc/net/bpf_jit_comp64.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index ede8cb3e453f..472d4a551945 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -789,6 +789,32 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
                /* dst = *(u64 *)(ul) (src + off) */
                case BPF_LDX | BPF_MEM | BPF_DW:
                case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+                       /*
+                        * As PTR_TO_BTF_ID that uses BPF_PROBE_MEM mode could either be a valid
+                        * kernel pointer or NULL but not a userspace address, execute BPF_PROBE_MEM
+                        * load only if addr is kernel address (see is_kernel_addr()), otherwise
+                        * set dst_reg=0 and move on.
+                        */
+                       if (BPF_MODE(code) == BPF_PROBE_MEM) {
+                               EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], src_reg, off));
+                               if (IS_ENABLED(CONFIG_PPC_BOOK3E_64))
+                                       PPC_LI64(b2p[TMP_REG_2], 0x8000000000000000ul);
+                               else /* BOOK3S_64 */
+                                       PPC_LI64(b2p[TMP_REG_2], PAGE_OFFSET);
+                               EMIT(PPC_RAW_CMPLD(b2p[TMP_REG_1], b2p[TMP_REG_2]));
+                               PPC_BCC(COND_GT, (ctx->idx + 4) * 4);
+                               EMIT(PPC_RAW_LI(dst_reg, 0));
+                               /*
+                                * Check if 'off' is word aligned because PPC_BPF_LL()
+                                * (BPF_DW case) generates two instructions if 'off' is not
+                                * word-aligned and one instruction otherwise.
+                                */
+                               if (BPF_SIZE(code) == BPF_DW && (off & 3))
+                                       PPC_JMP((ctx->idx + 3) * 4);
+                               else
+                                       PPC_JMP((ctx->idx + 2) * 4);
+                       }
+
                        switch (size) {
                        case BPF_B:
                                EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
-- 
2.31.1
