charlesconnell commented on PR #6901:
URL: https://github.com/apache/hbase/pull/6901#issuecomment-2798813245

   Sure, here's a microbenchmark:
   
    ```
    import java.io.IOException;
    import java.util.function.IntConsumer;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Param;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.infra.Blackhole;
    import org.openjdk.jmh.runner.RunnerException;

    @State(Scope.Benchmark)
    public class ClosureBenchmark {

      @Param({ "10000" })
      public int loops;

      @Param({ "true", "false" })
      public boolean createNewClosure;

      @Benchmark
      public void test(Blackhole blackhole) {
        IntConsumer savedClosure = x -> {
          // Do some work that captures a reference to a variable from outside
          // the lambda, in this case blackhole, thus forcing the allocation of a closure.
          blackhole.consume(x);
        };
        for (int i = 0; i < loops; i++) {
          if (createNewClosure) {
            IntConsumer newClosure = x -> {
              // Do some work that captures a reference to a variable from outside
              // the lambda, in this case blackhole, thus forcing the allocation of a closure.
              // But the work inside the lambda doesn't change based on the value of i,
              // so in theory the compiler could avoid creating a new closure on
              // each loop iteration.
              blackhole.consume(x);
            };
            blackhole.consume(newClosure);
          } else {
            blackhole.consume(savedClosure);
          }
        }
      }

      public static void main(String[] args) throws RunnerException, IOException {
        org.openjdk.jmh.Main.main(args);
      }
    }
    ```
   
   which produced these results for me:
   
    ```
    Benchmark                                               (createNewClosure)  (loops)   Mode  Cnt       Score     Error   Units
    ClosureBenchmark.test                                                 true    10000  thrpt   25   33571.747 ± 331.844   ops/s
    ClosureBenchmark.test:·gc.alloc.rate                                  true    10000  thrpt   25    4876.137 ±  48.299  MB/sec
    ClosureBenchmark.test:·gc.alloc.rate.norm                             true    10000  thrpt   25  160003.792 ±   0.577    B/op
    ClosureBenchmark.test:·gc.churn.G1_Eden_Space                         true    10000  thrpt   25    4875.828 ±  55.586  MB/sec
    ClosureBenchmark.test:·gc.churn.G1_Eden_Space.norm                    true    10000  thrpt   25  159993.999 ± 945.061    B/op
    ClosureBenchmark.test:·gc.churn.G1_Survivor_Space                     true    10000  thrpt   25       0.006 ±   0.001  MB/sec
    ClosureBenchmark.test:·gc.churn.G1_Survivor_Space.norm                true    10000  thrpt   25       0.201 ±   0.030    B/op
    ClosureBenchmark.test:·gc.count                                       true    10000  thrpt   25    1423.000            counts
    ClosureBenchmark.test:·gc.time                                        true    10000  thrpt   25     865.000                ms
    ClosureBenchmark.test                                                false    10000  thrpt   25   37243.796 ± 233.865   ops/s
    ClosureBenchmark.test:·gc.alloc.rate                                 false    10000  thrpt   25       0.541 ±   0.004  MB/sec
    ClosureBenchmark.test:·gc.alloc.rate.norm                            false    10000  thrpt   25      16.015 ±   0.022    B/op
    ClosureBenchmark.test:·gc.churn.G1_Eden_Space                        false    10000  thrpt   25       0.761 ±   1.164  MB/sec
    ClosureBenchmark.test:·gc.churn.G1_Eden_Space.norm                   false    10000  thrpt   25      22.485 ±  34.382    B/op
    ClosureBenchmark.test:·gc.churn.G1_Survivor_Space                    false    10000  thrpt   25       0.142 ±   0.217  MB/sec
    ClosureBenchmark.test:·gc.churn.G1_Survivor_Space.norm               false    10000  thrpt   25       4.192 ±   6.411    B/op
    ClosureBenchmark.test:·gc.count                                      false    10000  thrpt   25       5.000            counts
    ClosureBenchmark.test:·gc.time                                       false    10000  thrpt   25      11.000                ms
    ```
   
    The test runs somewhat faster when it doesn't create a new lambda on each loop iteration (37243 ops/sec versus 33571 ops/sec), and the allocation rate is vastly lower (0.5 MB/sec versus 4876 MB/sec).
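
    To make the pattern concrete outside of JMH, here is a minimal sketch (not from the PR; the class and method names are hypothetical) of hoisting a capturing lambda out of a hot loop, which is exactly what the `savedClosure` branch of the benchmark does:

    ```java
    import java.util.function.IntConsumer;

    public class ClosureReuse {

      // Sums an array through a capturing lambda. The lambda captures `acc`,
      // so each evaluation of the lambda expression allocates a closure object.
      // Creating it once, outside the loop, means zero per-iteration allocations.
      static long sumReusingClosure(int[] values) {
        long[] acc = new long[1];
        IntConsumer add = v -> acc[0] += v; // closure allocated once, here
        for (int v : values) {
          add.accept(v); // no new closure per iteration
        }
        return acc[0];
      }

      public static void main(String[] args) {
        System.out.println(sumReusingClosure(new int[] { 1, 2, 3, 4 })); // prints 10
      }
    }
    ```

    Had the lambda been written inside the loop body, each iteration would evaluate the lambda expression afresh and, because it captures `acc`, allocate a new closure, matching the `createNewClosure=true` numbers above.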

