Re: [PATCH RFCv3 00/23] uprobes: Add support to optimize usdt probes on x86_64
On Fri, Apr 04, 2025 at 01:36:13PM -0700, Andrii Nakryiko wrote:
> On Thu, Mar 20, 2025 at 4:42 AM Jiri Olsa wrote:
> >
> > hi,
> > this patchset adds support to optimize usdt probes on top of the 5-byte
> > nop instruction.
> >
> > The generic approach (optimize all uprobes) is hard due to emulating
> > possibly multiple original instructions and the related issues. The
> > usdt case, which stores a 5-byte nop, seems much easier, so we are
> > starting with that.
> >
> > The basic idea is to replace the breakpoint exception with a syscall,
> > which is faster on x86_64. For more details please see the changelog
> > of patch 8.
> >
> > The run_bench_uprobes.sh benchmark triggers a uprobe (on top of
> > different original instructions) in a loop and counts how many of
> > those happened per second (the unit below is million loops).
> >
> > There's a big speed up if you compare the current usdt implementation
> > (uprobe-nop) with the proposed usdt (uprobe-nop5):
> >
> > current:
> >     usermode-count : 152.604 ± 0.044M/s
> >     syscall-count  :  13.359 ± 0.042M/s
> > --> uprobe-nop     :   3.229 ± 0.002M/s
> >     uprobe-push    :   3.086 ± 0.004M/s
> >     uprobe-ret     :   1.114 ± 0.004M/s
> >     uprobe-nop5    :   1.121 ± 0.005M/s
> >     uretprobe-nop  :   2.145 ± 0.002M/s
> >     uretprobe-push :   2.070 ± 0.001M/s
> >     uretprobe-ret  :   0.931 ± 0.001M/s
> >     uretprobe-nop5 :   0.957 ± 0.001M/s
> >
> > after the change:
> >     usermode-count : 152.448 ± 0.244M/s
> >     syscall-count  :  14.321 ± 0.059M/s
> >     uprobe-nop     :   3.148 ± 0.007M/s
> >     uprobe-push    :   2.976 ± 0.004M/s
> >     uprobe-ret     :   1.068 ± 0.003M/s
> > --> uprobe-nop5    :   7.038 ± 0.007M/s
> >     uretprobe-nop  :   2.109 ± 0.004M/s
> >     uretprobe-push :   2.035 ± 0.001M/s
> >     uretprobe-ret  :   0.908 ± 0.001M/s
> >     uretprobe-nop5 :   3.377 ± 0.009M/s
> >
> > I see a bit more speed up on Intel (above) compared to AMD. The big
> > nop5 speed up is partly due to emulating nop5 and partly due to the
> > optimization.
> >
> > The key speed up we do this for is the USDT switch from nop to nop5:
> >
> >     uprobe-nop  : 3.148 ± 0.007M/s
> >     uprobe-nop5 : 7.038 ± 0.007M/s
> >
> > rfc v3 changes:
> > - I tried to have just a single syscall for both the entry and return
> >   uprobe, but it turned out to be slower than having two separate
> >   syscalls, probably due to the extra save/restore processing we have
> >   to do for the argument reg; I see differences like:
> >
> >     2 syscalls: uprobe-nop5 : 7.038 ± 0.007M/s
> >     1 syscall:  uprobe-nop5 : 6.943 ± 0.003M/s
> >
> > - use instructions (nop5/int3/call) to determine the state of the
> >   uprobe update in the process
> > - removed the endbr instruction from the uprobe trampoline
> > - seccomp changes
> >
> > pending todo (or follow ups):
> > - shadow stack fails for uprobe session setup, will fix it in the next
> >   version
> > - use PROCMAP_QUERY in tests
> > - alloc 'struct uprobes_state' for mm_struct only when needed [Andrii]
>
> All the pending TODO stuff seems pretty minor. So is there anything
> else holding your patch set back from graduating out of RFC status?
>
> David's uprobe_write_opcode() patch set landed, so you should be ready
> to rebase and post a proper v1 now, right?
>
> Performance wins are huge, looking forward to this making it into the
> kernel soon!
I just saw the notification that those changes are on the way to the mm
tree. I have the rebase ready and want to post it this week, could be v1 ;-)

jirka

> >
> > thanks,
> > jirka
> >
> >
> > Cc: Eyal Birger
> > Cc: k...@kernel.org
> > ---
> > Jiri Olsa (23):
> >   uprobes: Rename arch_uretprobe_trampoline function
> >   uprobes: Make copy_from_page global
> >   uprobes: Move ref_ctr_offset update out of uprobe_write_opcode
> >   uprobes: Add uprobe_write function
> >   uprobes: Add nbytes argument to uprobe_write_opcode
> >   uprobes: Add orig argument to uprobe_write and uprobe_write_opcode
> >   uprobes: Remove breakpoint in unapply_uprobe under mmap_write_lock
> >   uprobes/x86: Add uprobe syscall to speed up uprobe
> >   uprobes/x86: Add mapping for optimized uprobe trampolines
> >   uprobes/x86: Add support to emulate nop5 instruction
> >   uprobes/x86: Add support to optimize uprobes
> >   selftests/bpf: Use 5-byte nop for x86 usdt probes
> >   selftests/bpf: Reorg the uprobe_syscall test function
> >   selftests/bpf: Rename uprobe_syscall_executed prog to
> >     test_uretprobe_multi
> >   selftests/bpf: Add uprobe/usdt syscall tests
> >   selftests/bpf: Add hit/attach/detach race optimized uprobe test
> >   selftests/bpf: Add uprobe syscall sigill signal test
> >   selftests/bpf: Add optimized usdt variant for basic usdt test
> >   selftests/bpf: Add uprobe_regs_equal test
> >   selftests/bpf: Change test_uretprobe
Re: [PATCH RFCv3 00/23] uprobes: Add support to optimize usdt probes on x86_64
On Thu, Mar 20, 2025 at 4:42 AM Jiri Olsa wrote:
>
> hi,
> this patchset adds support to optimize usdt probes on top of the 5-byte
> nop instruction.
>
> The generic approach (optimize all uprobes) is hard due to emulating
> possibly multiple original instructions and the related issues. The
> usdt case, which stores a 5-byte nop, seems much easier, so we are
> starting with that.
>
> The basic idea is to replace the breakpoint exception with a syscall,
> which is faster on x86_64. For more details please see the changelog
> of patch 8.
>
> The run_bench_uprobes.sh benchmark triggers a uprobe (on top of
> different original instructions) in a loop and counts how many of
> those happened per second (the unit below is million loops).
>
> There's a big speed up if you compare the current usdt implementation
> (uprobe-nop) with the proposed usdt (uprobe-nop5):
>
> current:
>     usermode-count : 152.604 ± 0.044M/s
>     syscall-count  :  13.359 ± 0.042M/s
> --> uprobe-nop     :   3.229 ± 0.002M/s
>     uprobe-push    :   3.086 ± 0.004M/s
>     uprobe-ret     :   1.114 ± 0.004M/s
>     uprobe-nop5    :   1.121 ± 0.005M/s
>     uretprobe-nop  :   2.145 ± 0.002M/s
>     uretprobe-push :   2.070 ± 0.001M/s
>     uretprobe-ret  :   0.931 ± 0.001M/s
>     uretprobe-nop5 :   0.957 ± 0.001M/s
>
> after the change:
>     usermode-count : 152.448 ± 0.244M/s
>     syscall-count  :  14.321 ± 0.059M/s
>     uprobe-nop     :   3.148 ± 0.007M/s
>     uprobe-push    :   2.976 ± 0.004M/s
>     uprobe-ret     :   1.068 ± 0.003M/s
> --> uprobe-nop5    :   7.038 ± 0.007M/s
>     uretprobe-nop  :   2.109 ± 0.004M/s
>     uretprobe-push :   2.035 ± 0.001M/s
>     uretprobe-ret  :   0.908 ± 0.001M/s
>     uretprobe-nop5 :   3.377 ± 0.009M/s
>
> I see a bit more speed up on Intel (above) compared to AMD. The big
> nop5 speed up is partly due to emulating nop5 and partly due to the
> optimization.
>
> The key speed up we do this for is the USDT switch from nop to nop5:
>
>     uprobe-nop  : 3.148 ± 0.007M/s
>     uprobe-nop5 : 7.038 ± 0.007M/s
>
> rfc v3 changes:
> - I tried to have just a single syscall for both the entry and return
>   uprobe, but it turned out to be slower than having two separate
>   syscalls, probably due to the extra save/restore processing we have
>   to do for the argument reg; I see differences like:
>
>     2 syscalls: uprobe-nop5 : 7.038 ± 0.007M/s
>     1 syscall:  uprobe-nop5 : 6.943 ± 0.003M/s
>
> - use instructions (nop5/int3/call) to determine the state of the
>   uprobe update in the process
> - removed the endbr instruction from the uprobe trampoline
> - seccomp changes
>
> pending todo (or follow ups):
> - shadow stack fails for uprobe session setup, will fix it in the next
>   version
> - use PROCMAP_QUERY in tests
> - alloc 'struct uprobes_state' for mm_struct only when needed [Andrii]

All the pending TODO stuff seems pretty minor. So is there anything
else holding your patch set back from graduating out of RFC status?

David's uprobe_write_opcode() patch set landed, so you should be ready
to rebase and post a proper v1 now, right?

Performance wins are huge, looking forward to this making it into the
kernel soon!
>
> thanks,
> jirka
>
>
> Cc: Eyal Birger
> Cc: k...@kernel.org
> ---
> Jiri Olsa (23):
>   uprobes: Rename arch_uretprobe_trampoline function
>   uprobes: Make copy_from_page global
>   uprobes: Move ref_ctr_offset update out of uprobe_write_opcode
>   uprobes: Add uprobe_write function
>   uprobes: Add nbytes argument to uprobe_write_opcode
>   uprobes: Add orig argument to uprobe_write and uprobe_write_opcode
>   uprobes: Remove breakpoint in unapply_uprobe under mmap_write_lock
>   uprobes/x86: Add uprobe syscall to speed up uprobe
>   uprobes/x86: Add mapping for optimized uprobe trampolines
>   uprobes/x86: Add support to emulate nop5 instruction
>   uprobes/x86: Add support to optimize uprobes
>   selftests/bpf: Use 5-byte nop for x86 usdt probes
>   selftests/bpf: Reorg the uprobe_syscall test function
>   selftests/bpf: Rename uprobe_syscall_executed prog to
>     test_uretprobe_multi
>   selftests/bpf: Add uprobe/usdt syscall tests
>   selftests/bpf: Add hit/attach/detach race optimized uprobe test
>   selftests/bpf: Add uprobe syscall sigill signal test
>   selftests/bpf: Add optimized usdt variant for basic usdt test
>   selftests/bpf: Add uprobe_regs_equal test
>   selftests/bpf: Change test_uretprobe_regs_change for uprobe and
>     uretprobe
>   selftests/bpf: Add 5-byte nop uprobe trigger bench
>   seccomp: passthrough uprobe systemcall without filtering
>   selftests/seccomp: validate uprobe syscall passes through seccomp
>
>  arch/arm/probes/uprobes/core.c         | 2 +-
>  arch/x86/entry/syscalls/syscall_64.tbl | 1 +
>  arch/x86/include/asm/uprobes.h
Re: [PATCH RFCv3 00/23] uprobes: Add support to optimize usdt probes on x86_64
On Thu, Mar 20, 2025 at 01:23:44PM +0100, Oleg Nesterov wrote:
> On 03/20, Jiri Olsa wrote:
> >
> > hi,
> > this patchset adds support to optimize usdt probes on top of 5-byte
> > nop instruction.
>
> Just in case... This series conflicts with (imo very important) changes
> from David,
>
>   [PATCH v2 0/3] kernel/events/uprobes: uprobe_write_opcode() rewrite
>   https://lore.kernel.org/all/20250318221457.3055598-1-da...@redhat.com/
>
> I think they should be merged first.

ok, I'll check on those

thanks,
jirka

> (and I am not sure yet, but it seems that we should cleanup (fix?) the
> update_ref_ctr() logic before other changes).
>
> Oleg.
Re: [PATCH RFCv3 00/23] uprobes: Add support to optimize usdt probes on x86_64
On 03/20, Jiri Olsa wrote:
>
> hi,
> this patchset adds support to optimize usdt probes on top of 5-byte
> nop instruction.

Just in case... This series conflicts with (imo very important) changes
from David,

  [PATCH v2 0/3] kernel/events/uprobes: uprobe_write_opcode() rewrite
  https://lore.kernel.org/all/20250318221457.3055598-1-da...@redhat.com/

I think they should be merged first.

(and I am not sure yet, but it seems that we should cleanup (fix?) the
update_ref_ctr() logic before other changes).

Oleg.