dferstay opened a new pull request #539: URL: https://github.com/apache/pulsar-client-go/pull/539
### Motivation

While working on https://github.com/apache/pulsar-client-go/pull/535 I noticed that a panic() can occur while an internal/connection instance is in the process of closing. The race is as follows:

- T1: calls SendRequestNoWait(), checks the connection state, and prepares to enter the select statement.
- T2: calls TriggerClose(), which closes cnx and the closeCh.
- T3: the run() goroutine that processes incomingRequestsCh drops into `case <-closeCh:` and calls failLeftRequestsWhenClose(), which drains and closes incomingRequestsCh.
- T1: resumes and drops into the select, where both closeCh and incomingRequestsCh are now closed. When more than one case of a `select` is ready, the case executed is chosen at random (see https://tour.golang.org/concurrency/5), so T1 may attempt to send on the closed incomingRequestsCh and panic.

### Modifications

This commit introduces a connectionClosing state and a WaitGroup that tracks incomingRequestsCh writes in the SendRequest() and SendRequestNoWait() methods:

* TriggerClose() moves the connection into the connectionClosing state before closeCh is closed.
* failLeftRequestsWhenClose() waits on the WaitGroup for outstanding SendRequest() calls to complete before it closes incomingRequestsCh.
* The SendRequest() methods add to the WaitGroup before checking the connection state; if the state is either closing or closed, they return ErrConnectionClosed.

With the above, no goroutine can attempt to write to incomingRequestsCh without being tracked by the WaitGroup, and incomingRequestsCh is not closed until all in-flight writes have completed. A minimal sketch of the pattern is included at the end of this description.

### Verifying this change

This change is covered by existing tests that exercise the internal/connection lifecycle.
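For illustration, here is a minimal, self-contained sketch of the pattern described above. It is not the actual internal/connection code: the `request` type, the channel buffer size, and the `done` channel used to make the demo deterministic are invented for the example, and the real field and constant names in the client may differ.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// Simplified stand-in for internal/connection; only the pieces needed to
// show how the WaitGroup guards writes to incomingRequestsCh are kept.
const (
	connectionReady int32 = iota
	connectionClosing
	connectionClosed
)

var ErrConnectionClosed = errors.New("connection closed")

type request struct{ id uint64 }

type connection struct {
	state              int32 // one of the connection* constants, accessed atomically
	closeOnce          sync.Once
	closeCh            chan struct{}
	done               chan struct{}
	incomingRequestsCh chan *request
	incomingRequestsWG sync.WaitGroup
}

func newConnection() *connection {
	c := &connection{
		closeCh:            make(chan struct{}),
		done:               make(chan struct{}),
		incomingRequestsCh: make(chan *request, 10),
	}
	go c.run()
	return c
}

// SendRequestNoWait registers with the WaitGroup *before* checking the state,
// so a concurrent close cannot shut incomingRequestsCh while this write is pending.
func (c *connection) SendRequestNoWait(req *request) error {
	c.incomingRequestsWG.Add(1)
	defer c.incomingRequestsWG.Done()

	if s := atomic.LoadInt32(&c.state); s == connectionClosing || s == connectionClosed {
		return ErrConnectionClosed
	}
	select {
	case <-c.closeCh:
		return ErrConnectionClosed
	case c.incomingRequestsCh <- req:
		return nil
	}
}

// TriggerClose marks the connection as closing first, then signals closeCh.
func (c *connection) TriggerClose() {
	atomic.StoreInt32(&c.state, connectionClosing)
	c.closeOnce.Do(func() { close(c.closeCh) })
}

// failLeftRequestsWhenClose waits for in-flight senders to finish before it
// closes and drains incomingRequestsCh, so no send can hit a closed channel.
func (c *connection) failLeftRequestsWhenClose() {
	c.incomingRequestsWG.Wait()
	close(c.incomingRequestsCh)
	for req := range c.incomingRequestsCh {
		fmt.Println("failing pending request", req.id)
	}
	atomic.StoreInt32(&c.state, connectionClosed)
}

func (c *connection) run() {
	defer close(c.done)
	for {
		select {
		case <-c.closeCh:
			c.failLeftRequestsWhenClose()
			return
		case req := <-c.incomingRequestsCh:
			fmt.Println("processing request", req.id)
		}
	}
}

func main() {
	c := newConnection()
	_ = c.SendRequestNoWait(&request{id: 1})
	c.TriggerClose()
	// After TriggerClose, new sends fail fast with ErrConnectionClosed.
	if err := c.SendRequestNoWait(&request{id: 2}); err != nil {
		fmt.Println("send after close:", err)
	}
	<-c.done
}
```

The key ordering is that senders call Add before the state check, while the closing path sets connectionClosing before waiting on the WaitGroup; any sender therefore either observes the closing state and bails out, or is counted by the WaitGroup before the channel can be closed.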
