zzzming commented on issue #770:
URL:
https://github.com/apache/pulsar-client-go/issues/770#issuecomment-1164758611
In partitionConsumer, there is an event loop that handles events from
`eventsCh` asynchronously. However, blocking can still occur for two reasons
in the design.
1. `eventsCh` is a buffered channel with a capacity of 10. This means
sending to the channel blocks once the channel is full. On the surface,
that is the reason this issue is observed.
2. The event loop processes events synchronously. It is the inner `for`
loop in the function runEventsLoop. The code looks like this:
```
for {
	for i := range pc.eventsCh {
		switch v := i.(type) {
		case *ackRequest:
			pc.internalAck(v)
		case *redeliveryRequest:
			pc.internalRedeliver(v)
		case *unsubscribeRequest:
			pc.internalUnsubscribe(v)
		case *getLastMsgIDRequest:
			pc.internalGetLastMessageID(v)
		case *seekRequest:
			pc.internalSeek(v)
		case *seekByTimeRequest:
			pc.internalSeekByTime(v)
		case *closeRequest:
			pc.internalClose(v)
			return
		}
	}
}
```
There is not sufficient information here to diagnose which call blocked,
but at least 10 pending events were stuck behind it.
@wolfstudy What is your opinion on running each of these internal calls in a
separate goroutine so they do not block the event loop? Like:
```
for i := range pc.eventsCh {
	switch v := i.(type) {
	case *ackRequest:
		go pc.internalAck(v)
	case *redeliveryRequest:
		go pc.internalRedeliver(v)
	case *unsubscribeRequest:
		go pc.internalUnsubscribe(v)
	case *getLastMsgIDRequest:
		go pc.internalGetLastMessageID(v)
	...
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]