shibd commented on PR #551:
URL: https://github.com/apache/pulsar-client-cpp/pull/551#issuecomment-4035689417

   ## TOCTOU race in `asyncWrite` and `asyncReceive` — the core bug window is 
not fully closed
   
   **Files:** `lib/ClientConnection.h` — `asyncWrite` / `asyncReceive` templates
   
   The PR introduces `pendingOperations_` to let `ConnectionPool::close()` wait 
for in-flight callbacks before tearing down the `io_context`. However, in both 
`asyncWrite` and `asyncReceive`, the counter increment is placed **after** the 
lock is released:
   
   ```cpp
   Lock lock{mutex_};
   if (isClosed()) { return; }
   lock.unlock();
   pendingOperations_++;   // ← window is here
   // async operation starts...
   ```
   
   Between `lock.unlock()` and `pendingOperations_++`, another thread can call `close()`, which sets `state_ = Disconnected`; `ConnectionPool::close()` then reads `pendingOperations() == 0`, concludes there are no in-flight operations, and proceeds to destroy the `io_context`. The async operation then starts on a destroyed context, and its callback fires into freed memory. This is exactly the crash scenario the PR intends to prevent.
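   The bad interleaving can be replayed deterministically in a stripped-down model. The class below is illustrative, not the PR's actual types: `contextDestroyed_` stands in for the torn-down `io_context`, and the three steps are called sequentially to force the ordering that a real scheduler only produces occasionally:
   
   ```cpp
   #include <iostream>
   #include <mutex>
   
   // Model of the *buggy* ordering: the isClosed() check runs under the lock,
   // but the increment happens after unlock(), leaving a window in which a
   // racing close() sees both "disconnected" and a zero counter.
   struct BuggyConnection {
       std::mutex mutex_;
       bool closed_ = false;
       int pendingOperations_ = 0;
       bool contextDestroyed_ = false;  // stand-in for the destroyed io_context
   
       // Step 1 (I/O thread): check-then-unlock, as in the PR's current code.
       bool passClosedCheck() {
           std::unique_lock<std::mutex> lock{mutex_};
           if (closed_) return false;
           lock.unlock();  // ← window opens here, before the increment
           return true;
       }
   
       // Step 2 (closing thread): close() + ConnectionPool::close() run inside
       // the window and observe pendingOperations_ == 0.
       void closeAndDestroy() {
           std::lock_guard<std::mutex> lock{mutex_};
           closed_ = true;
           if (pendingOperations_ == 0) contextDestroyed_ = true;
       }
   
       // Step 3 (I/O thread): the increment lands too late; the async
       // operation would now start on a destroyed context.
       void startOperation() { pendingOperations_++; }
   };
   
   int main() {
       BuggyConnection conn;
       if (conn.passClosedCheck()) {  // step 1: isClosed() was false
           conn.closeAndDestroy();    // step 2: racing close sees zero pending
           conn.startOperation();     // step 3: too late, context already gone
       }
       std::cout << (conn.contextDestroyed_ ? "use-after-destroy" : "safe") << std::endl;
   }
   ```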
   
   **Suggested fix:** Move `pendingOperations_++` to before `lock.unlock()`, so 
the counter is incremented while the mutex is still held and before `close()` 
can observe a clean zero:
   
   ```cpp
   Lock lock{mutex_};
   if (isClosed()) { return; }
   pendingOperations_++;   // ← increment under lock, before close() can race
   lock.unlock();
   // async operation starts...
   ```
   
   This ensures that any concurrent `close()` + `ConnectionPool::close()` sequence either (a) observes the non-zero counter and waits, or (b) runs entirely before the increment, in which case `isClosed()` returns true and the operation is skipped before the counter is ever touched, so nothing leaks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to