wolfstudy opened a new issue #687:
URL: https://github.com/apache/pulsar-client-go/issues/687
Currently, when sending and consuming messages, we use a single `eventsCh` channel to receive the command requests for message sending and receiving:
```
func (p *partitionProducer) runEventsLoop() {
    for {
        select {
        case i := <-p.eventsChan:
            // Send, flush, and close requests all arrive on this one channel.
            switch v := i.(type) {
            case *sendRequest:
                p.internalSend(v)
            case *flushRequest:
                p.internalFlush(v)
            case *closeProducer:
                p.internalClose(v)
                return
            }
        case <-p.connectClosedCh:
            p.reconnectToBroker()
        case <-p.batchFlushTicker.C:
            if p.batchBuilder.IsMultiBatches() {
                p.internalFlushCurrentBatches()
            } else {
                p.internalFlushCurrentBatch()
            }
        }
    }
}
```
This looks fine under normal circumstances, but in extreme cases these requests can affect each other. For example, for the send command there is a local `maxPendingMessages` parameter, which is used as the buffer size of `eventsCh` (default: 1000):
```
eventsChan: make(chan interface{}, maxPendingMessages),
```
Assume that the number of locally pending messages reaches the `maxPendingMessages` threshold (which is entirely possible in practice). The channel buffer is then full and enqueueing on it blocks, so related flush or close requests are also blocked, because they go through the same channel (`eventsCh`).
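To make the failure mode concrete, here is a minimal standalone sketch, assuming stand-in request types and a small buffer rather than the real client code, showing how a flush or close request queued behind a full buffered channel blocks until a consumer drains it:
```
package main

import (
    "fmt"
    "time"
)

type sendRequest struct{ payload string }
type flushRequest struct{ doneCh chan struct{} }

func main() {
    // Stand-in for eventsChan; the Pulsar client sizes it with maxPendingMessages (default 1000).
    eventsChan := make(chan interface{}, 3)

    // Fill the buffer with pending send requests while nothing is draining the channel.
    for i := 0; i < 3; i++ {
        eventsChan <- &sendRequest{payload: fmt.Sprintf("msg-%d", i)}
    }

    // A flush (or close) request goes through the same full channel, so enqueueing it blocks.
    enqueued := make(chan struct{})
    go func() {
        eventsChan <- &flushRequest{doneCh: make(chan struct{})} // blocks here
        close(enqueued)
    }()

    select {
    case <-enqueued:
        fmt.Println("flush request enqueued")
    case <-time.After(500 * time.Millisecond):
        fmt.Println("flush request is stuck behind pending send requests")
    }
}
```
In the real client the event loop is the only consumer of `eventsChan`, so whatever keeps it from draining the channel also keeps flush and close requests from ever being accepted.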
Once runEventsLoop is blocked, it will cause the following series of
problems:
1. sending messages fails with `TimeoutError`

2. `receiveCommand()` gets stuck and its goroutine is parked (`gopark`); see the goroutine profile attached below (a sketch of how such a profile can be collected follows it)

[pprof.gateway.goroutine.008.pb.gz](https://github.com/apache/pulsar-client-go/files/7767543/pprof.gateway.goroutine.008.pb.gz)
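For reference, a goroutine profile like the one attached above can be collected from any Go service that exposes the standard `net/http/pprof` endpoints; the sketch below is generic and not part of the Pulsar client (the listen address is arbitrary):
```
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
    // Once the server is running, the goroutine profile can be inspected with:
    //   go tool pprof http://localhost:6060/debug/pprof/goroutine
    // or saved as a .pb.gz file with:
    //   curl -o goroutine.pb.gz http://localhost:6060/debug/pprof/goroutine
    log.Println(http.ListenAndServe("localhost:6060", nil))
}
```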
In this case, when we try to `bin/pulsar-admin topics unload` this topic, it does not help, because it is actually the blocking inside the Go SDK that causes the **sending timeout**. At this point, if the service using the Go SDK is restarted, it recovers immediately.