zhaohaidao opened a new issue, #164: URL: https://github.com/apache/fluss-rust/issues/164
### Search before asking

- [x] I searched in the [issues](https://github.com/apache/fluss-rust/issues) and found nothing similar.

### Please describe the bug 🐞

## Summary

The consumer process can hang due to a deadlock inside `LogFetchBuffer`. GDB backtraces show a lock-order inversion between `LogFetchBuffer::buffered_buckets` and `LogFetchBuffer::add`.

## Impact

`fluss_cpp_consume_table` becomes stuck (no progress), even though the process is still running.

## Evidence (GDB stack excerpts)

```
Thread 1:  LogFetchBuffer::buffered_buckets -> holds completed_fetches, waiting on pending_fetches
Thread 23: LogFetchBuffer::add              -> holds pending_fetches,   waiting on completed_fetches
```

Relevant frames (from `all_bt.txt`):

- `crates/fluss/src/client/table/log_fetch_buffer.rs:225` in `buffered_buckets`
- `crates/fluss/src/client/table/log_fetch_buffer.rs:191` in `add`

## Root Cause

`buffered_buckets` acquires locks in this order:

1. `next_in_line_fetch`
2. `completed_fetches`
3. `pending_fetches`

`add` acquires locks in this order:

1. `pending_fetches`
2. `completed_fetches`

This lock-order inversion can deadlock when `buffered_buckets` holds `completed_fetches` and waits for `pending_fetches`, while `add` holds `pending_fetches` and waits for `completed_fetches`.

### Solution

## Proposed Fix

Ensure a consistent lock ordering across all methods, or avoid holding multiple locks at once. A minimal fix is to split `buffered_buckets` into separate critical sections so that only one lock is held at a time (see the sketch at the end of this issue).

## Expected Behavior

No deadlock; the consumer continues polling and processing records.

## Actual Behavior

The process is stuck while threads wait on `parking_lot` mutexes.

### Are you willing to submit a PR?

- [x] I'm willing to submit a PR!
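To make the proposed fix concrete, here is a rough sketch of `buffered_buckets` rewritten so that each `parking_lot` mutex is locked in its own scope and released before the next one is taken. The field names (`next_in_line_fetch`, `completed_fetches`, `pending_fetches`) are taken from the backtrace above, but the types and the simplified `CompletedFetch` struct are assumptions, not the actual definitions in `log_fetch_buffer.rs`.

```rust
// Sketch only: field names come from the backtrace; types are assumed and do
// not mirror crates/fluss/src/client/table/log_fetch_buffer.rs exactly.
use std::collections::{HashSet, VecDeque};

use parking_lot::Mutex;

// Hypothetical stand-in for the real completed-fetch type.
struct CompletedFetch {
    bucket_id: i32,
}

struct LogFetchBuffer {
    next_in_line_fetch: Mutex<Option<CompletedFetch>>,
    completed_fetches: Mutex<VecDeque<CompletedFetch>>,
    pending_fetches: Mutex<VecDeque<CompletedFetch>>,
}

impl LogFetchBuffer {
    /// Collects the buckets that currently have buffered data.
    ///
    /// Each mutex is locked in its own block and dropped before the next one
    /// is acquired, so this method never holds two locks at once and cannot
    /// participate in a lock-order cycle with `add`.
    fn buffered_buckets(&self) -> HashSet<i32> {
        let mut buckets = HashSet::new();

        {
            let next = self.next_in_line_fetch.lock();
            if let Some(fetch) = next.as_ref() {
                buckets.insert(fetch.bucket_id);
            }
        } // `next_in_line_fetch` released here.

        {
            let completed = self.completed_fetches.lock();
            buckets.extend(completed.iter().map(|f| f.bucket_id));
        } // `completed_fetches` released here.

        {
            let pending = self.pending_fetches.lock();
            buckets.extend(pending.iter().map(|f| f.bucket_id));
        } // `pending_fetches` released here.

        buckets
    }
}
```

The trade-off is that the result is no longer an atomic snapshot across the three collections, which should be acceptable for a "which buckets have buffered data" query. The alternative fix is to keep multi-lock sections but enforce one global acquisition order (e.g. always `pending_fetches` before `completed_fetches`) in both `buffered_buckets` and `add`.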
