Gleiphir2769 opened a new pull request, #976: URL: https://github.com/apache/pulsar-client-go/pull/976
Master Issue: #927

### Motivation

**Note: This is part of the work for [PIP 74](https://github.com/apache/pulsar/wiki/PIP-74%3A-Pulsar-client-memory-limits) in the Go client.**

The consumer-side memory limit relies on flow control, so we first need to support an auto-scaled consumer receiver queue before implementing the consumer-side memory limitation.

To implement this feature, I refactored `dispatcher()`, because its original logic is a bit weird:

https://github.com/apache/pulsar-client-go/blob/75d2df3b7d1d1d04fb660a1b6c11ede1d2f161bf/pulsar/consumer_partition.go#L1241-L1255

`pc.queueCh` passes `[]*message` to `dispatcher()`. To handle the slice, `dispatcher()` has to declare `queueCh`, `messageCh`, and `nextMessage` outside the `select`, which makes the loop hard to understand and hard to extend. So I changed `pc.queueCh` from `chan []*message` to `chan *message` and split the original loop into a control loop (close / new connection / clear) and a data loop (receiving messages from `pc.queueCh`); a sketch of this split follows at the end of this description. More details in #927.

### Modifications

- Refactor the `dispatcher()` loop.
- Add the `AutoScaledReceiverQueueSize` option for `Consumer` (see the usage sketch at the end of this description).

### Verifying this change

- [x] Make sure that the change passes the CI checks.

### Does this pull request potentially affect one of the following parts:

*If `yes` was chosen, please highlight the changes*

- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API: (yes / **no**)
- The schema: (yes / **no** / don't know)
- The default values of configurations: (yes / **no**)
- The wire protocol: (yes / **no**)

### Documentation

- Does this pull request introduce a new feature? (**yes** / no)
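For readers who want a feel for the control-loop / data-loop split described above, here is a minimal, self-contained Go sketch of one possible shape of it. All names (`message`, `partitionConsumer`, `queueCh`, `messageCh`, `closeCh`, `connectedCh`, `clearQueueCh`, `controlLoop`, `dataLoop`, `drainQueue`) are simplified stand-ins for illustration and do not necessarily match the fields or functions in `consumer_partition.go`; see the actual diff for the real implementation.

```go
// Illustrative sketch only, not the PR's actual code.
package sketch

type message struct {
	payload []byte
}

type partitionConsumer struct {
	queueCh      chan *message // changed from chan []*message: one message per send
	messageCh    chan *message // channel the application-facing consumer reads from
	closeCh      chan struct{} // closed when the consumer shuts down
	connectedCh  chan struct{} // signaled when a (re)connection is established
	clearQueueCh chan struct{} // signaled when the receiver queue must be cleared
}

// runDispatcher starts the two loops that replace the single dispatcher() loop.
func (pc *partitionConsumer) runDispatcher() {
	go pc.controlLoop()
	go pc.dataLoop()
}

// controlLoop handles lifecycle events only (close / new connection / clear).
func (pc *partitionConsumer) controlLoop() {
	for {
		select {
		case <-pc.closeCh:
			return
		case <-pc.connectedCh:
			// e.g. re-issue flow permits after a reconnect (omitted here)
		case <-pc.clearQueueCh:
			pc.drainQueue()
		}
	}
}

// dataLoop forwards messages one at a time from queueCh to messageCh,
// with no loop-carried state living outside the select.
func (pc *partitionConsumer) dataLoop() {
	for {
		select {
		case <-pc.closeCh:
			return
		case m := <-pc.queueCh:
			select {
			case pc.messageCh <- m:
			case <-pc.closeCh:
				return
			}
		}
	}
}

// drainQueue discards whatever is currently buffered in queueCh.
func (pc *partitionConsumer) drainQueue() {
	for {
		select {
		case <-pc.queueCh:
		default:
			return
		}
	}
}
```

With `chan *message`, backpressure and buffering are handled per message by the channel itself, which is what makes it practical to grow or shrink the receiver queue size at runtime.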

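For the new consumer option, here is a hedged usage sketch from application code. The exact field name in `pulsar.ConsumerOptions` is whatever this PR introduces; `EnableAutoScaledReceiverQueueSize` is assumed below, so check the diff for the real name and default.

```go
package main

import (
	"context"
	"log"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	consumer, err := client.Subscribe(pulsar.ConsumerOptions{
		Topic:            "my-topic",
		SubscriptionName: "my-sub",
		// Assumed field name for the option added by this PR; when enabled,
		// the receiver queue grows and shrinks automatically instead of
		// staying fixed at ReceiverQueueSize.
		EnableAutoScaledReceiverQueueSize: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	msg, err := consumer.Receive(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received message ID %v", msg.ID())
	consumer.Ack(msg)
}
```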