It turns out that there are BPF use cases that rely on nesting RCU
Tasks Trace readers.  These use cases are well-served by the old
rcu_read_lock_trace() and rcu_read_unlock_trace() functions that maintain
a nesting counter in the task_struct structure.  But these use cases incur
a performance penalty when using the shiny new rcu_read_lock_tasks_trace()
and rcu_read_unlock_tasks_trace() functions, which nest in the same way
that SRCU does.

This means that rcu_read_lock_trace() and rcu_read_unlock_trace()
will be with us for some time.  Therefore, remove the checkpatch.pl
deprecation.

Reported-by: Puranjay Mohan <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Andy Whitcroft <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Dwaipayan Ray <[email protected]>
Cc: Lukas Bulwahn <[email protected]>
---
 scripts/checkpatch.pl | 2 --
 1 file changed, 2 deletions(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 0492d6afc9a1fc..7a0aa139a2424a 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -865,8 +865,6 @@ our %deprecated_apis = (
        "DEFINE_IDR"                            => "DEFINE_XARRAY",
        "idr_init"                              => "xa_init",
        "idr_init_base"                         => "xa_init_flags",
-       "rcu_read_lock_trace"                   => "rcu_read_lock_tasks_trace",
-       "rcu_read_unlock_trace"                 => "rcu_read_unlock_tasks_trace",
 );
 
 #Create a search pattern for all these strings to speed up a loop below
-- 
2.40.1
