AndrewJD79 opened a new issue #12844:
URL: https://github.com/apache/pulsar/issues/12844


   **Describe the bug**
   The statistics implementation in the C++ client has two concurrency issues.
   
   1. The ProducerStatsImpl (and ConsumerStatsImpl) classes use a single shared lock to protect access to internal data. The lock is taken on each sent or received message, so under high load this shared lock causes significant contention and performance degradation.
   The profiler shows that sending and receiving threads block each other.
   
   
![original-profiling](https://user-images.githubusercontent.com/2276675/142137028-b1dab92d-d6a4-47c3-84fd-666bccfd188a.png)
   
   Since the sending and receiving functions access different subsets of the members, they should be protected by separate mutexes, or another approach (e.g. atomic counters) should be chosen; a sketch of the separate-mutex idea follows the screenshots. For example, after patching the issue I got roughly a 1/3 throughput improvement. As you can see in the screenshot below, threads are waiting on I/O rather than on mutexes.
   
![pathed-profiling](https://user-images.githubusercontent.com/2276675/142137475-36f31817-29da-43d5-9ddd-ecbbb4948d8b.png)
   
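   For illustration, here is a minimal sketch of the separate-mutex idea. The class and member names are hypothetical, not the actual client internals: each hot path locks only the mutex guarding the members it touches, so senders and receivers no longer contend on one lock. Plain counters like these could alternatively be made `std::atomic`, avoiding the locks entirely.

```cpp
#include <cstdint>
#include <mutex>

// Hypothetical sketch, not the actual Pulsar internals: the single shared
// mutex is split so that the send path and the receive path each lock only
// the mutex guarding the members they touch.
class StatsImplSketch {
 public:
  void messageSent(uint64_t numBytes) {
    std::lock_guard<std::mutex> lock(sendMutex_);  // contended only by senders
    totalMsgsSent_++;
    totalBytesSent_ += numBytes;
  }

  void messageReceived(uint64_t numBytes) {
    std::lock_guard<std::mutex> lock(receiveMutex_);  // contended only by receivers
    totalMsgsReceived_++;
    totalBytesReceived_ += numBytes;
  }

 private:
  std::mutex sendMutex_;  // guards send-side members only
  uint64_t totalMsgsSent_ = 0;
  uint64_t totalBytesSent_ = 0;

  std::mutex receiveMutex_;  // guards receive-side members only
  uint64_t totalMsgsReceived_ = 0;
  uint64_t totalBytesReceived_ = 0;
};
```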
   
   2. The ProducerStatsImpl implementation has a race between the destructor and the DeadlineTimer callback; one possible mitigation is sketched after the list. Consider the following scenario:
   
   
      1. The ProducerStatsImpl destructor acquires the mutex.
      2. The DeadlineTimer fires and its callback, flushAndReset, blocks on the mutex.
      3. The destructor calls timer.cancel, which cancels any pending operations but cannot cancel the callback already executing at step 2.
      4. The destructor releases the mutex.
      5. The DeadlineTimer callback acquires the mutex.
      6. The destructor destroys the object.
      7. The DeadlineTimer callback accesses deallocated memory.
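
   One possible mitigation, purely as a sketch and assuming the stats object is already managed by a `std::shared_ptr`: have the timer callback capture only a `std::weak_ptr`, so a callback that is already in flight when the last owner releases the object finds it gone instead of touching freed memory. All names here are illustrative, not the actual Pulsar code.

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <mutex>

// Hypothetical sketch, not the actual Pulsar code: the timer callback holds
// only a weak_ptr to the stats object. If the object has already been
// destroyed by the time the callback runs, lock() fails and the callback is
// a no-op, so steps 5-7 of the scenario above can no longer dereference
// freed memory.
class StatsSketch : public std::enable_shared_from_this<StatsSketch> {
 public:
  explicit StatsSketch(boost::asio::io_service& io) : timer_(io) {}

  // Must be called on an instance owned by a std::shared_ptr
  // (e.g. created with std::make_shared), or shared_from_this() throws.
  void scheduleFlush() {
    timer_.expires_from_now(boost::posix_time::seconds(60));
    std::weak_ptr<StatsSketch> weakSelf = shared_from_this();
    timer_.async_wait([weakSelf](const boost::system::error_code& ec) {
      if (ec) {
        return;  // timer was cancelled
      }
      if (auto self = weakSelf.lock()) {  // object still alive?
        self->flushAndReset();
        self->scheduleFlush();
      }
      // lock() failing means the object was destroyed; nothing to do.
    });
  }

 private:
  void flushAndReset() {
    std::lock_guard<std::mutex> lock(mutex_);
    // ... log and reset counters ...
  }

  boost::asio::deadline_timer timer_;
  std::mutex mutex_;
};
```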
   
   Are you willing to accept a PR for issue one, or for both?
   
   

