On 3/6/26 7:30 AM, Michal Luczaj wrote:
bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
path and the iter prog attempts to update a sockmap, which then ends up
spinning at sock_map_update_elem()'s bh_lock_sock():
WARNING: possible recursive locking detected
test_progs/1393 is trying to acquire lock:
ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: sock_map_update_elem+0xdb/0x1f0
but task is already holding lock:
ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(slock-AF_UNIX);
lock(slock-AF_UNIX);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by test_progs/1393:
#0: ffff88814b59c790 (&p->lock){+.+.}-{4:4}, at: bpf_seq_read+0x59/0x10d0
#1: ffff88811ec25fd8 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: bpf_seq_read+0x42c/0x10d0
#2: ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
#3: ffffffff85a6a7c0 (rcu_read_lock){....}-{1:3}, at: bpf_iter_run_prog+0x51d/0xb00
Call Trace:
dump_stack_lvl+0x5d/0x80
print_deadlock_bug.cold+0xc0/0xce
__lock_acquire+0x130f/0x2590
lock_acquire+0x14e/0x2b0
_raw_spin_lock+0x30/0x40
sock_map_update_elem+0xdb/0x1f0
bpf_prog_2d0075e5d9b721cd_dump_unix+0x55/0x4f4
bpf_iter_run_prog+0x5b9/0xb00
bpf_iter_unix_seq_show+0x1f7/0x2e0
bpf_seq_read+0x42c/0x10d0
vfs_read+0x171/0xb20
ksys_read+0xff/0x200
do_syscall_64+0x6b/0x3a0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
Suggested-by: Kuniyuki Iwashima <[email protected]>
Suggested-by: Martin KaFai Lau <[email protected]>
Fixes: 2c860a43dd77 ("bpf: af_unix: Implement BPF iterator for UNIX domain socket.")
Signed-off-by: Michal Luczaj <[email protected]>
---
net/unix/af_unix.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 3756a93dc63a..3d2cfb4ecbcd 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -3729,15 +3729,14 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
struct bpf_prog *prog;
struct sock *sk = v;
uid_t uid;
- bool slow;
int ret;
if (v == SEQ_START_TOKEN)
return 0;
- slow = lock_sock_fast(sk);
+ lock_sock(sk);
- if (unlikely(sk_unhashed(sk))) {
+ if (unlikely(sock_flag(sk, SOCK_DEAD))) {
ret = SEQ_SKIP;
goto unlock;
}
Switching to lock_sock() fixes the deadlock, but it does not provide mutual
exclusion with unix_release_sock(), which uses unix_state_lock() exclusively
and does not touch lock_sock() at all. So a dying socket can still reach the
BPF prog concurrently with unix_release_sock() running on another CPU.
Both SOCK_DEAD and the clearing of unix_peer(sk) happen under
unix_state_lock() in unix_release_sock(). Without taking unix_state_lock()
before the SOCK_DEAD check, there is a window:
iter                              unix_release_sock()
----                              -------------------
lock_sock(sk)
SOCK_DEAD == 0 (check passes)
                                  unix_state_lock(sk)
                                  unix_peer(sk) = NULL
                                  sock_set_flag(sk, SOCK_DEAD)
                                  unix_state_unlock(sk)
BPF prog runs
  → accesses unix_peer(sk) == NULL → crash
This was not raised in the v2 discussion.
The natural fix is to check SOCK_DEAD under unix_state_lock(). However,
holding unix_state_lock() throughout BPF prog execution would conflict with
patch 5: sock_map_sk_acquire_fast() also takes unix_state_lock() for AF_UNIX
sockets, resulting in a recursive spinlock deadlock.
Kuniyuki, Martin — what is the right approach here?
@@ -3747,7 +3746,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
prog = bpf_iter_get_info(&meta, false);
ret = unix_prog_seq_show(prog, &meta, v, uid);
unlock:
- unlock_sock_fast(sk, slow);
+ release_sock(sk);
return ret;
}