divijvaidya commented on PR #12890:
URL: https://github.com/apache/kafka/pull/12890#issuecomment-1352933410

   Hey @ijuma - thank you for your question.
   
   1. I used [Amazon CodeGuru Profiler](https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html). I re-ran the test with [async-profiler](https://github.com/jvm-profiling-tools/async-profiler), which should not suffer from safepoint bias. The image below shows the usage of the function: it demonstrates that hash-code evaluation is not the major component, but calculating the size takes quite some time. After this change, the flamegraph does not contain this stack frame at all (since we don't parse the header again). The flamegraph is in HTML format, so I can't attach it here, but I'd be happy to share it offline if you are interested. I am running this with Corretto JDK 11 and `./profiler.sh -d 500 -i 40ms -e cycles` as the profiler configuration.
   
   ![Screenshot 2022-12-15 at 12 16 
04](https://user-images.githubusercontent.com/71267/207845907-aee8a6a2-ce06-4203-9acc-55e6a7272aee.png)
   
   2. Irrespective of the CPU usage optimisation it brings, I think this is a good change because it avoids parsing the request header twice.
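
   To make the idea concrete, here is a toy Java sketch of the pattern, parsing the header once at construction and reusing the cached result instead of re-reading the buffer on every size query. Note that `ParsedRequest`, the 4-byte header-length layout, and `totalSize` are all hypothetical illustrations, not Kafka's actual classes:

   ```java
   import java.nio.ByteBuffer;

   // Toy illustration (not Kafka's real request classes): the header is
   // parsed exactly once, in the constructor, and the result is cached.
   public class ParsedRequest {
       private final ByteBuffer buffer;
       private final int headerSize; // cached at construction, never re-parsed

       public ParsedRequest(ByteBuffer buffer) {
           this.buffer = buffer.duplicate();
           // hypothetical layout: the first 4 bytes encode the header length
           this.headerSize = this.buffer.getInt(0);
       }

       public int totalSize() {
           // reuse the cached header size; no second pass over the buffer
           return headerSize + buffer.remaining();
       }

       public static void main(String[] args) {
           ByteBuffer buf = ByteBuffer.allocate(16);
           buf.putInt(0, 8); // header claims 8 bytes
           ParsedRequest req = new ParsedRequest(buf);
           System.out.println(req.totalSize()); // 8 header + 16 buffered = 24
       }
   }
   ```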
   
   3. The CPU usage standard deviation is more than 2-3% over the duration of the test, so I cannot reliably say that I observed lower CPU usage after this change. Latency measurement is also not reliable because the bottleneck is thread contention on the partition lock in UnifiedLog, so this change is not expected to affect latency.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
