wsry commented on issue #11088: [FLINK-16012][runtime] Reduce the default number of buffers per channel from 2 to 1
URL: https://github.com/apache/flink/pull/11088#issuecomment-601651394
 
 
   Sorry for the late update. I implemented an imbalance partitioner which emits record ```n``` to channel ```n```; the results are below (I ran the tests multiple times and the results are reproducible). After reducing the buffers per channel from 2 to 1 (on both the upstream and downstream side), there is still a regression of about 10% with 8 floating buffers. After increasing the floating buffers to 128 (64 is not enough, and values between 64 and 128 were not tested), the performance catches up.
   
   @pnowojski @zhijiangW What do you think about the results? Should we reduce the buffers per channel to 1 on both the upstream and downstream sides, or only on the downstream side (which may require adding a new config option)?
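   For reference, the two knobs being varied in the runs below correspond to these Flink configuration options (a sketch of a ```flink-conf.yaml``` fragment; the values shown are the ones from the third run, not a recommendation):

   ```
   # flink-conf.yaml fragment (assumed mapping to the settings tested below)
   taskmanager.network.memory.buffers-per-channel: 1         # reduced from the default of 2
   taskmanager.network.memory.floating-buffers-per-gate: 128 # raised from the default of 8
   ```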
   
   **2 buffers per channel and 8 floating buffers per gate**
   ```
   Benchmark                                                                                        (channelsFlushTimeout)   Mode  Cnt      Score      Error   Units
   CustomPartitionerStreamNetworkThroughputBenchmarkExecutor.imbalancePartitionerNetworkThroughput                     N/A  thrpt   30  34689.075 ± 4374.022  ops/ms
   DataSkewStreamNetworkThroughputBenchmarkExecutor.networkSkewedThroughput                                            N/A  thrpt   30  18461.286 ± 1127.107  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                     1000,1ms  thrpt   30  23221.891 ± 4628.278  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                   1000,100ms  thrpt   30  29338.881 ± 1233.042  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                               1000,100ms,SSL  thrpt   30   7203.058 ±  822.253  ops/ms
   ```
   
   **1 buffer per channel and 8 floating buffers per gate**
   ```
   Benchmark                                                                                        (channelsFlushTimeout)   Mode  Cnt      Score      Error   Units
   CustomPartitionerStreamNetworkThroughputBenchmarkExecutor.imbalancePartitionerNetworkThroughput                     N/A  thrpt   30  29087.328 ± 4030.118  ops/ms
   DataSkewStreamNetworkThroughputBenchmarkExecutor.networkSkewedThroughput                                            N/A  thrpt   30  18973.591 ± 1241.189  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                     1000,1ms  thrpt   30  22216.006 ± 1017.993  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                   1000,100ms  thrpt   30  23267.835 ±  976.012  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                               1000,100ms,SSL  thrpt   30   7403.758 ±  820.510  ops/ms
   ```
   
   **1 buffer per channel and 128 floating buffers per gate**
   ```
   Benchmark                                                                                        (channelsFlushTimeout)   Mode  Cnt      Score      Error   Units
   CustomPartitionerStreamNetworkThroughputBenchmarkExecutor.imbalancePartitionerNetworkThroughput                     N/A  thrpt   30  33130.029 ±  676.771  ops/ms
   DataSkewStreamNetworkThroughputBenchmarkExecutor.networkSkewedThroughput                                            N/A  thrpt   30  18229.952 ± 2121.084  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                     1000,1ms  thrpt   30  21549.288 ± 2643.449  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                                   1000,100ms  thrpt   30  25414.551 ± 1287.725  ops/ms
   StreamNetworkThroughputBenchmarkExecutor.networkThroughput                                               1000,100ms,SSL  thrpt   30   7341.885 ±  872.192  ops/ms
   ```
