dferstay opened a new pull request #408:
URL: https://github.com/apache/pulsar-client-go/pull/408


   Previously, we read the system clock twice for each event unless we
   switched partitions early due to reaching `maxBatchingMessages` or
   `maxBatchingSize`.
   
   Now, we read the clock once per event.
   
   This improves performance (especially for larger batch sizes) as the
   router function is called for every message produced.
   
   A bench test of the default router was added; results are below:
   ```
   name             old time/op    new time/op    delta
   DefaultRouter       106ns ± 0%      61ns ± 0%  -42.64%  (p=0.029 n=4+4)
   DefaultRouter-4     106ns ± 0%      61ns ± 0%     ~     (p=0.079 n=4+5)
   ```
   
   Signed-off-by: Daniel Ferstay <[email protected]>
   
   
   ### Motivation
   
   The default router (`default_router`) reads the system clock twice for every
   message routed on the production code path. When testing at scale with larger
   batch sizes, the time spent reading the system clock during routing shows up
   in pprof profiles.
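   
   Not part of the PR, but a quick way to see why this matters: the standard
   library's `testing.Benchmark` helper can measure the per-call cost of a bare
   `time.Now()` read (everything below is an illustrative sketch, not code from
   this repository):
   
   ```go
   package main
   
   import (
       "fmt"
       "testing"
       "time"
   )
   
   func main() {
       // Measure the per-call cost of reading the system clock.
       res := testing.Benchmark(func(b *testing.B) {
           var sink time.Time
           for i := 0; i < b.N; i++ {
               sink = time.Now()
           }
           _ = sink
       })
       fmt.Printf("time.Now(): %d ns/op\n", res.NsPerOp())
   }
   ```
   
   At millions of messages per second, eliminating one of two such reads per
   routed message is roughly the kind of saving the benchstat table above shows.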
   
   ### Modifications
   
   This change modifies the default router to read the system clock once for 
every message routed.
   A bench test of the default router was added to verify that the change 
yields a speedup.
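   
   The single-read idea can be sketched as follows. This is a minimal
   illustration under assumed names (`router`, `maxDelay`, `lastSwitchNanos`
   are invented here), not the actual pulsar-client-go implementation: the
   router reads the clock exactly once per message and reuses that value for
   both the staleness check and the stored switch timestamp.
   
   ```go
   package main
   
   import (
       "fmt"
       "sync/atomic"
       "time"
   )
   
   // router is a hypothetical sketch of a time-based partition router.
   type router struct {
       numPartitions    uint32
       maxDelay         time.Duration
       currentPartition uint32
       lastSwitchNanos  int64
   }
   
   func (r *router) route() uint32 {
       // The only clock read per routed message.
       now := time.Now().UnixNano()
       last := atomic.LoadInt64(&r.lastSwitchNanos)
       if now-last >= r.maxDelay.Nanoseconds() {
           // Reuse `now` as the stored switch time instead of calling
           // time.Now() a second time.
           if atomic.CompareAndSwapInt64(&r.lastSwitchNanos, last, now) {
               atomic.AddUint32(&r.currentPartition, 1)
           }
       }
       return atomic.LoadUint32(&r.currentPartition) % r.numPartitions
   }
   
   func main() {
       r := &router{numPartitions: 3, maxDelay: 10 * time.Millisecond}
       fmt.Println(r.route() < 3) // prints "true"
   }
   ```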
   
   ### Verifying this change
   
   This change is a small rework of existing logic; the new bench test of the 
default router verifies that it yields a speedup.
   
   ### Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API: no
     - The schema: no
     - The default values of configurations: no
     - The wire protocol: no
   
   ### Documentation
   
     - Does this pull request introduce a new feature? no
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
