dferstay opened a new pull request #694: URL: https://github.com/apache/pulsar-client-go/pull/694
### Motivation

Previously, we used atomic operations to read and update parts of the default router state. Unfortunately, the reads and updates could race under concurrent calls, leading to unnecessary clock reads and an associated slowdown in performance.

### Modifications

Now, we use atomic addition to increment the message count and batch size. This removes the race condition by ensuring that each goroutine observes a unique message count, so only one goroutine performs the clock read. (A rough sketch of the idea is included at the end of this description.)

### Verifying this change

- [ ] Make sure that the change passes the CI checks.

Run the default router unit tests to verify correctness.

Run the parallel default router benchmarks (https://github.com/apache/pulsar-client-go/pull/693) and verify the performance speedup; results before and after:

```
name                      old time/op  new time/op  delta
DefaultRouterParallel     14.7ns ± 1%  14.8ns ± 2%     ~     (p=0.459 n=9+8)
DefaultRouterParallel-2   55.0ns ±13%  41.9ns ± 0%  -23.86%  (p=0.000 n=10+7)
DefaultRouterParallel-4   53.5ns ± 9%  44.1ns ± 8%  -17.68%  (p=0.000 n=10+9)
DefaultRouterParallel-8   54.2ns ± 8%  53.2ns ± 3%     ~     (p=1.000 n=10+8)
DefaultRouterParallel-16  56.4ns ±21%  51.3ns ± 0%     ~     (p=0.165 n=10+8)
```

The large variance in the old `DefaultRouterParallel-2` and `DefaultRouterParallel-16` results is due to the race described above: some test runs read the system clock more often than others.

### Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): No
- The public API: No
- The schema: No
- The default values of configurations: No
- The wire protocol: No

### Documentation

- Does this pull request introduce a new feature? No
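Below is a minimal sketch of the counter-based approach described above. It is not the pulsar-client-go implementation; the type, field names, and thresholds (`roundRobinRouter`, `clockReadInterval`, etc.) are assumptions made for illustration. What it demonstrates is that `atomic.AddUint32` hands every concurrent caller a unique count, so at most one caller per interval pays for the `time.Now()` call.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// roundRobinRouter is a hypothetical stand-in for the default router state.
type roundRobinRouter struct {
	// Configuration (illustrative names).
	maxMessages       uint32        // switch partition after this many messages
	maxBatchBytes     uint32        // ...or after this many buffered bytes
	maxDelay          time.Duration // ...or after this much time
	clockReadInterval uint32        // read the clock only on every N-th message

	// State mutated atomically.
	currentPartition uint32
	msgCount         uint32
	batchSize        uint32
	lastChangeNanos  int64
}

func (r *roundRobinRouter) choosePartition(msgLen uint32, numPartitions uint32) uint32 {
	// Atomic adds give each goroutine a unique count/size, removing the
	// load-then-store race of the previous approach.
	count := atomic.AddUint32(&r.msgCount, 1)
	size := atomic.AddUint32(&r.batchSize, msgLen)

	if count >= r.maxMessages || size >= r.maxBatchBytes {
		return r.advancePartition(numPartitions)
	}

	// Because count is unique per caller, at most one goroutine per interval
	// reaches this branch and reads the system clock.
	if count%r.clockReadInterval == 0 {
		now := time.Now().UnixNano()
		if now-atomic.LoadInt64(&r.lastChangeNanos) >= r.maxDelay.Nanoseconds() {
			return r.advancePartition(numPartitions)
		}
	}
	return atomic.LoadUint32(&r.currentPartition) % numPartitions
}

func (r *roundRobinRouter) advancePartition(numPartitions uint32) uint32 {
	// Best-effort reset: a concurrent add may be lost, which is acceptable
	// for a batching heuristic.
	atomic.StoreUint32(&r.msgCount, 0)
	atomic.StoreUint32(&r.batchSize, 0)
	atomic.StoreInt64(&r.lastChangeNanos, time.Now().UnixNano())
	return atomic.AddUint32(&r.currentPartition, 1) % numPartitions
}

func main() {
	r := &roundRobinRouter{
		maxMessages:       1000,
		maxBatchBytes:     128 * 1024,
		maxDelay:          10 * time.Millisecond,
		clockReadInterval: 100,
		lastChangeNanos:   time.Now().UnixNano(),
	}
	for i := 0; i < 5; i++ {
		fmt.Println(r.choosePartition(512, 4))
	}
}
```

The design trade-off this illustrates is the one the PR describes: the contended read-modify-write on shared counters becomes a single atomic add per counter on the hot path, and clock reads are amortized across `clockReadInterval` messages instead of potentially happening on every racing call.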
