lukecwik commented on code in PR #22190:
URL: https://github.com/apache/beam/pull/22190#discussion_r918231039


##########
sdks/java/harness/jmh/src/main/java/org/apache/beam/fn/harness/jmh/control/ExecutionStateSamplerBenchmark.java:
##########
@@ -103,33 +104,38 @@ public void tearDown() {
   }
 
   @Benchmark
-  @Threads(1)
-  public void testTinyBundleRunnersCoreStateSampler(RunnersCoreStateSampler state)
+  @Threads(10)

Review Comment:
   I was hoping to keep all the variants the same so that we can compare directly across them, so any updates like `@Threads` should be applied to them all.
   
   It would make sense to use, say, 512 threads for the tiny variants and 16 for the large ones, and to have a single sampler instance, one tracker per thread, and a few states per tracker, since that emulates the current setup within the Dataflow worker and SDK harness. A rough JMH sketch of that shape is below.
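   Something like the following, as a sketch only: the `Sampler`/`Tracker` classes here are hypothetical stand-ins so the snippet compiles on its own; the real benchmark would use `ExecutionStateSampler`/`ExecutionStateTracker` from the harness.

```java
import java.util.concurrent.atomic.AtomicLong;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

public class SamplerShapeSketch {
  /** Hypothetical stand-in for ExecutionStateSampler. */
  static class Sampler {}

  /** Hypothetical stand-in for ExecutionStateTracker. */
  static class Tracker {
    final Sampler sampler;
    final AtomicLong transitions = new AtomicLong();

    Tracker(Sampler sampler) {
      this.sampler = sampler;
    }

    void activate() {
      transitions.incrementAndGet();
    }

    void deactivate() {
      transitions.incrementAndGet();
    }
  }

  // Scope.Benchmark: a single sampler instance shared by every benchmark
  // thread, matching the one sampler per SDK harness / Dataflow worker.
  @State(Scope.Benchmark)
  public static class SharedSampler {
    final Sampler sampler = new Sampler();
  }

  // Scope.Thread: one tracker per thread, emulating one in-flight bundle
  // per processing thread.
  @State(Scope.Thread)
  public static class PerThreadTracker {
    Tracker tracker;

    @Setup(Level.Trial)
    public void setup(SharedSampler shared) {
      tracker = new Tracker(shared.sampler);
    }
  }

  @Benchmark
  @Threads(512) // 512 for the tiny variants; the large ones would use 16
  public void tinyBundle(PerThreadTracker state) {
    // A few state transitions per invocation, as a tiny bundle would do.
    state.tracker.activate();
    state.tracker.deactivate();
  }
}
```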



##########
runners/core-java/src/main/java/org/apache/beam/runners/core/metrics/ExecutionStateTracker.java:
##########
@@ -298,7 +304,17 @@ public long getNextLullReportMs() {
     return nextLullReportMs;
   }
 
-  protected void takeSample(long millisSinceLastSample) {
+  void takeSample(long millisSinceLastSample) {
+    if (SAMPLING_UPDATER.compareAndSet(this, 0, 1)) {

Review Comment:
   Do we need `sampling`/`SAMPLING_UPDATER` because you wanted multiple threads to be able to activate/deactivate states within a single tracker at the same time?
   
   Note that there is only one ExecutionStateSampler thread, and each tracker is associated with a single bundle at a time. To emulate this behavior we would want one sampler shared by N trackers, each with its own states, where N is the number of concurrent threads you want to use. For reference, my reading of the guard is sketched below.
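   If I'm reading the diff right, the guard boils down to something like this (the surrounding class is a hypothetical stand-in; only the CAS pattern comes from the diff):

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class TrackerGuardSketch {
  // 0 = idle, 1 = a sample is currently being taken.
  private volatile int sampling = 0;

  private static final AtomicIntegerFieldUpdater<TrackerGuardSketch> SAMPLING_UPDATER =
      AtomicIntegerFieldUpdater.newUpdater(TrackerGuardSketch.class, "sampling");

  void takeSample(long millisSinceLastSample) {
    // Only one caller at a time wins the CAS; a concurrent caller skips
    // this sample instead of blocking.
    if (SAMPLING_UPDATER.compareAndSet(this, 0, 1)) {
      try {
        // ... attribute millisSinceLastSample to the currently active state ...
      } finally {
        SAMPLING_UPDATER.set(this, 0);
      }
    }
  }
}
```

   With a single sampler thread and one bundle per tracker at a time, the question is whether that CAS ever has a second contender.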
   


