ting-xu opened a new issue, #1455:
URL: https://github.com/apache/pulsar-client-go/issues/1455

   **Is your feature request related to a problem? Please describe.**
   In the following scenario, the program runs into a deadlock and at the same time consumes very high CPU.
   - in the application layer there is an in-process dispatcher that processes messages from the Pulsar client consumers; its purpose is to increase processing parallelism while keeping message ordering (partitioning by message key). It has a larger buffer size than the Pulsar client consumer's receiver queue and uses many goroutines (one per partition) to process messages concurrently; messages with the same key go to the same partition and are processed sequentially by the same goroutine (see the sketch after this list)
   - in the message processing logic, each message is filtered and transformed, derived new messages are published, and then the current message is acked to finish processing
   - the consumers and producers all share one Pulsar client instance, so consumers and producers use the same internal.MemoryLimitController for ReserveMemory/ReleaseMemory
   - when subscribing to topics with a high enough message rate (in our DC the topic has 10K+ messages per second), the program suddenly runs into a deadlock state: the buffered messages push memory usage up to the limit; in every processing goroutine the producer tries to publish a new message but cannot reserve memory (internalSendAsync, with the option to block when the queue is full), and the memory reserved by the consumers cannot be released because these goroutines no longer make progress, so no message can be acked
   - all processing goroutines are trying to publish via producer.SendAsync() and are stuck in the memoryLimitController.ReserveMemory() method; the loop (`for !m.TryReserveMemory(size)`) has become an infinite loop because newUsage is always larger than the limit and there is no chance for the current usage to decrease (which would also explain the high CPU usage)
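
   Below is a minimal sketch of the pattern described above, assuming hypothetical topic names, subscription name, partition count and buffer sizes; the comments mark where the goroutines get stuck once the shared memory limit is reached.

```go
package main

import (
    "context"
    "hash/fnv"
    "log"

    "github.com/apache/pulsar-client-go/pulsar"
)

const dispatcherPartitions = 16 // hypothetical partition count

func main() {
    // One shared client: all its consumers and producers reserve/release
    // against the same memory limit controller (64 MB by default).
    client, err := pulsar.NewClient(pulsar.ClientOptions{
        URL:              "pulsar://localhost:6650", // placeholder broker URL
        MemoryLimitBytes: 64 * 1024 * 1024,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
        Topic:             "in-topic", // placeholder
        SubscriptionName:  "my-sub",   // placeholder
        Type:              pulsar.Shared,
        ReceiverQueueSize: 1000,
    })
    if err != nil {
        log.Fatal(err)
    }

    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic: "out-topic", // placeholder
    })
    if err != nil {
        log.Fatal(err)
    }

    // In-process dispatcher: messages with the same key always go to the same
    // partition channel and are processed sequentially by the same goroutine.
    partitions := make([]chan pulsar.ConsumerMessage, dispatcherPartitions)
    for i := range partitions {
        partitions[i] = make(chan pulsar.ConsumerMessage, 2048) // larger than the receiver queue
        go func(ch <-chan pulsar.ConsumerMessage) {
            for cm := range ch {
                derived := transform(cm.Payload()) // filter/transform step

                // Once unacked consumer messages have pushed usage to the
                // limit, this publish blocks inside ReserveMemory()
                // (the `for !m.TryReserveMemory(size)` loop quoted above)...
                producer.SendAsync(context.Background(),
                    &pulsar.ProducerMessage{Key: cm.Key(), Payload: derived},
                    func(_ pulsar.MessageID, _ *pulsar.ProducerMessage, err error) {
                        if err != nil {
                            log.Println("send failed:", err)
                        }
                    })

                // ...so the ack below is never reached, consumer-side memory
                // is never released, and every partition goroutine is stuck.
                consumer.Ack(cm.Message)
            }
        }(partitions[i])
    }

    // Route each received message to a partition by hashing its key.
    for cm := range consumer.Chan() {
        h := fnv.New32a()
        h.Write([]byte(cm.Key()))
        partitions[int(h.Sum32())%dispatcherPartitions] <- cm
    }
}

// transform stands in for the real filtering/transforming logic.
func transform(in []byte) []byte { return in }
```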
   
   **Describe the solution you'd like**
   In the client struct, use two memLimit controllers, one for consumers and one for producers. In ClientOptions, expose two MemoryLimitBytes settings, one for consumers and one for producers (a rough sketch follows).
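
   A rough sketch of the proposed split is below; the struct and both field names are purely illustrative and do not exist in the current client.

```go
// Illustrative only: these names do not exist in the current client; they just
// sketch the proposed split of the single shared limit into two.
package pulsar

// SplitMemoryLimits shows the idea: one limit backing a memLimit controller
// used only by consumers, and one backing a controller used only by producers,
// so producer back-pressure can never prevent consumer memory from being
// released (and acks from happening).
type SplitMemoryLimits struct {
    ConsumerMemoryLimitBytes int64 // would cap memory reserved by consumers
    ProducerMemoryLimitBytes int64 // would cap memory reserved by producers
}
```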
   
   **Describe alternatives you've considered**
   Currently we set MemoryLimitBytes=-1 to avoid this issue (see the snippet below), but it is not an ideal solution.
   When one is not aware of this issue, uses the default 64M config, and faces a scenario like the above, it is hard to find the root cause when the deadlock happens.
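
   For reference, the workaround only changes the client construction shown in the sketch above (the URL is still a placeholder):

```go
// Workaround we use today: disable the shared memory limit entirely by
// setting MemoryLimitBytes to -1 instead of the 64 MB default.
client, err := pulsar.NewClient(pulsar.ClientOptions{
    URL:              "pulsar://localhost:6650", // placeholder
    MemoryLimitBytes: -1,
})
if err != nil {
    log.Fatal(err)
}
defer client.Close()
```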
   
   **Additional context**
   In our environment, programs using v0.18.0 seem to enter the above deadlock state more easily than programs using v0.14.0.
   

