Replace trace_foo() with the new trace_invoke_foo() at call sites that
are already guarded by trace_foo_enabled(), avoiding a redundant
static_branch_unlikely() re-evaluation inside the tracepoint.
trace_invoke_foo() invokes the tracepoint callbacks directly without
consulting the static branch a second time.

Suggested-by: Steven Rostedt <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Vineeth Pillai (Google) <[email protected]>
Assisted-by: Claude:claude-sonnet-4-6
---
 io_uring/io_uring.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 0fa844faf2871..68b7656e1547a 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -299,7 +299,7 @@ static __always_inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
        }
 
        if (trace_io_uring_complete_enabled())
-               trace_io_uring_complete(req->ctx, req, cqe);
+               trace_invoke_io_uring_complete(req->ctx, req, cqe);
        return true;
 }
 
-- 
2.53.0
