pnowojski commented on a change in pull request #41:
URL: https://github.com/apache/flink-benchmarks/pull/41#discussion_r755090811
##########
File path: README.md
##########
@@ -31,14 +31,33 @@ There're mainly two ways:
mvn -Dflink.version=<FLINK_VERSION> clean package exec:exec \
-Dbenchmarks="<benchmark_class>"
```
-An example flink version can be -Dflink.version=1.12-SNAPSHOT.
+
+ An example flink version can be -Dflink.version=1.12-SNAPSHOT.
+
+3. Run the uber jar directly like:
+
+ ```
+ java -jar target/benchmarks.jar -rf csv "<benchmark_class>"
+ ```
We also support to run each benchmark once (with only one fork and one iteration) for testing, with below command:
```
mvn test -P test
```
+## Parameters
+
+There are some built-in params to run different benchmarks, these can be shown/overridden from the command line.
Review comment:
Missed `params`
##########
File path: src/main/java/org/apache/flink/state/benchmark/ListStateBenchmark.java
##########
@@ -26,6 +26,7 @@
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.infra.BenchmarkParams;
Review comment:
Why is this needed?
##########
File path: src/main/java/org/apache/flink/state/benchmark/StateBenchmarkBase.java
##########
@@ -20,15 +20,17 @@
import org.apache.flink.benchmark.BenchmarkBase;
import org.apache.flink.contrib.streaming.state.benchmark.StateBackendBenchmarkUtils;
import org.apache.flink.runtime.state.KeyedStateBackend;
-
Review comment:
Are you using the same formatting settings as for the main Flink repository?
https://nightlies.apache.org/flink/flink-docs-master/docs/flinkdev/ide_setup/#code-formatting
##########
File path: README.md
##########
@@ -31,14 +31,33 @@ There're mainly two ways:
mvn -Dflink.version=<FLINK_VERSION> clean package exec:exec \
-Dbenchmarks="<benchmark_class>"
```
-An example flink version can be -Dflink.version=1.12-SNAPSHOT.
+
+ An example flink version can be -Dflink.version=1.12-SNAPSHOT.
+
+3. Run the uber jar directly like:
+
+ ```
+ java -jar target/benchmarks.jar -rf csv "<benchmark_class>"
+ ```
We also support to run each benchmark once (with only one fork and one iteration) for testing, with below command:
```
mvn test -P test
```
+## Parameters
+
+There are some built-in params to run different benchmarks, these can be shown/overridden from the command line.
+
+```
+# show all the params combination for the <benchmark_class>
Review comment:
Missed `params`
##########
File path: src/main/java/org/apache/flink/state/benchmark/StateBenchmarkBase.java
##########
@@ -52,10 +54,25 @@
final ThreadLocalRandom random = ThreadLocalRandom.current();
@Param({"HEAP", "ROCKSDB"})
- protected StateBackendBenchmarkUtils.StateBackendType backendType;
+ private StateBackendBenchmarkUtils.StateBackendType backendType;
+
+ @Param({StateBenchmarkConstants.defaultStateDataDirPath})
+ private String stateDataDirPath;
Review comment:
Won't this, in combination with https://github.com/apache/flink-benchmarks/blob/master/save_jmh_result.py#L72:L76, change the names of the benchmarks uploaded to [codespeed](http://codespeed.dak8s.net:8000/changes/)?
Do we really want this to be reported like this? Won't it clog the results, and the WebUI in particular?
Secondly, even if we do want this, we would have to manually rename all of the benchmarks in the database to preserve the continuity of the results in the graphs.
Maybe it's actually a better idea, after all, not to use JMH's `@Param` for this? Maybe simply use an environment variable?
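A minimal sketch of the environment-variable alternative. The variable name and default value here are hypothetical stand-ins, not the actual flink-benchmarks code; the point is that the path never enters the `@Param` space, so the reported benchmark names stay unchanged:

```java
// Hypothetical sketch: resolve the state data directory from an
// environment variable instead of a JMH @Param. Since JMH only folds
// @Param values into benchmark names, a value read this way leaves the
// uploaded benchmark names untouched.
public class StateDirFromEnv {
    // Stand-in for StateBenchmarkConstants.defaultStateDataDirPath.
    static final String DEFAULT_STATE_DATA_DIR = System.getProperty("java.io.tmpdir");

    // Name of the environment variable is made up for illustration.
    static String stateDataDirPath() {
        String fromEnv = System.getenv("STATE_BENCHMARK_DATA_DIR");
        return (fromEnv == null || fromEnv.isEmpty()) ? DEFAULT_STATE_DATA_DIR : fromEnv;
    }

    public static void main(String[] args) {
        System.out.println(stateDataDirPath());
    }
}
```

A `@Setup` method in `StateBenchmarkBase` could then call `stateDataDirPath()` when creating the backend, keeping the directory configurable from CI without touching JMH's parameter matrix.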
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]