lbradstreet commented on a change in pull request #9129:
URL: https://github.com/apache/kafka/pull/9129#discussion_r540361683



##########
File path: jmh-benchmarks/README.md
##########
@@ -1,10 +1,69 @@
-### JMH-Benchmark module
+### JMH-Benchmarks module
 
 This module contains benchmarks written using [JMH](https://openjdk.java.net/projects/code-tools/jmh/) from OpenJDK.
 Writing correct micro-benchmarks in Java (or another JVM language) is difficult and there are many non-obvious pitfalls (many due to compiler optimizations).
 JMH is a framework for running and analyzing benchmarks (micro or macro) written in Java (or another JVM language).
 
+### Running benchmarks
+
+If you want to set specific JMH flags or only run certain benchmarks, passing arguments via gradle tasks is cumbersome. The provided `jmh.sh` script simplifies this.
+
+The default behavior is to run all benchmarks:
+
+    ./jmh-benchmarks/jmh.sh
+    
+Pass a pattern or name after the command to select the benchmarks:
+
+    ./jmh-benchmarks/jmh.sh LRUCacheBenchmark
+
+Check which benchmarks match the provided pattern:
+
+    ./jmh-benchmarks/jmh.sh -l LRUCacheBenchmark
+
+Run a specific test and override the number of forks, iterations and warm-up iterations to `2`:
+
+    ./jmh-benchmarks/jmh.sh -f 2 -i 2 -wi 2 LRUCacheBenchmark
+
+Run a specific test with async and GC profilers on Linux and flame graph output:
+
+    ./jmh-benchmarks/jmh.sh -prof gc -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph LRUCacheBenchmark
+
+The following sections cover async profiler and GC profilers in more detail.
+
+### Using JMH with async profiler
+
+It's good practice to check profiler output for microbenchmarks in order to verify that they are valid.
+JMH includes [async-profiler](https://github.com/jvm-profiling-tools/async-profiler) integration that makes this easy:
+
+    ./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so
+
+With flame graph output (the semicolon is escaped to ensure it is not treated as a command separator):
+
+    ./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph
+
+A number of arguments can be passed to configure async profiler; run the following for a description:
+
+    ./jmh-benchmarks/jmh.sh -prof async:help
+
+### Using JMH GC profiler
+
+It's good practice to run your benchmark with `-prof gc` to measure its allocation rate:
+
+    ./jmh-benchmarks/jmh.sh -prof gc
+
+Of particular importance are the `norm` alloc rates, which measure allocations per operation rather than allocations
+per second, since the per-second rate can increase simply because you have made your code faster.
+
+### Running JMH outside of gradle
+
+The JMH benchmarks can be run outside of gradle as you would with any executable jar file:
+
+    java -jar <kafka-repo-dir>/jmh-benchmarks/build/libs/kafka-jmh-benchmarks-all.jar -f2 LRUCacheBenchmark
+
+### Writing benchmarks
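
As an illustration only (this sketch is not taken from the PR's diff; the package, class and workload below are made up, and only the standard JMH annotations are assumed), a minimal JMH benchmark typically looks something like this:

    package org.apache.kafka.jmh.example; // hypothetical package, for illustration only

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    @State(Scope.Benchmark)
    @Fork(value = 1)
    @Warmup(iterations = 5)
    @Measurement(iterations = 15)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public class ExampleBenchmark {

        private String[] values;

        @Setup
        public void setup() {
            // Prepare inputs outside of the measured code path.
            values = new String[1024];
            for (int i = 0; i < values.length; i++)
                values[i] = "value-" + i;
        }

        @Benchmark
        public int measureJoinLength() {
            // The measured operation; return the result so the JIT cannot eliminate it.
            return String.join(",", values).length();
        }
    }

Returning the computed value from the `@Benchmark` method (or consuming it with JMH's `Blackhole`) keeps the JIT from dead-code-eliminating the work being measured.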

Review comment:
       Could we include a short section here about what should be put into a PR that has been benchmarked?
   
   I'm thinking:
   1. Benchmark comparisons for the code before and after the change.
   2. `-prof gc` results.
   3. An example async profile from at least one run.



