maskit opened a new pull request #7282:
URL: https://github.com/apache/trafficserver/pull/7282


   H1ClientSession can simply pass the large (1MB) buffers that come from Cache straight to `SSLWriteBuffer` (== `SSL_write`), because there is no framing on H1. H2ClientSession, on the other hand, can't do the same: the data has to be split into DATA frames (frame headers have to be inserted). During framing, a small (16KB) buffer is made for each DATA frame, and H2ClientSession passes those to `SSLWriteBuffer` one by one. As a result, we call `SSLWriteBuffer` roughly 64 times as often on H2 (1MB / 16KB = 64).
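
   To make the 64x figure concrete, here's a minimal sketch (plain C++, not ATS code; the counter just stands in for `SSLWriteBuffer`): a 1MB cache block split into 16KB DATA frame payloads yields 64 frames, so one write per frame means 64 write calls where H1 needs a single one.

   ```
   #include <cstdio>
   #include <cstddef>

   int main() {
       constexpr std::size_t cache_block = 1 * 1024 * 1024; // 1MB block coming from Cache
       constexpr std::size_t max_payload = 16 * 1024;       // 16KB DATA frame payload
       std::size_t writes = 0;
       for (std::size_t off = 0; off < cache_block; off += max_payload) {
           ++writes; // each frame header + payload would be handed to the TLS write path here
       }
       std::printf("DATA frames (and write calls) per 1MB block: %zu\n", writes); // prints 64
       return 0;
   }
   ```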
   
   This change reduces the number of write operations (`SSLWriteBuffer` calls) on H2 by increasing the buffer (block) size and storing multiple DATA frames in a single large buffer.
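
   The sketch below illustrates the batching idea (illustrative only; the 256KB staging size, the `FrameBatcher` name, and the write callback are assumptions rather than the actual ATS code): framed DATA is appended to one large block and flushed with a single write call, so the number of writes drops roughly in proportion to how many frames fit into the block.

   ```
   #include <cstdio>
   #include <cstddef>
   #include <functional>
   #include <vector>

   // Accumulates header+payload pairs into one large block and flushes them
   // with a single write call instead of one call per DATA frame.
   struct FrameBatcher {
       std::vector<unsigned char> staging;                                // large block holding many frames
       std::size_t capacity;
       std::function<void(const unsigned char *, std::size_t)> write_fn;  // stand-in for SSLWriteBuffer
       std::size_t write_calls = 0;

       FrameBatcher(std::size_t cap, std::function<void(const unsigned char *, std::size_t)> fn)
           : capacity(cap), write_fn(std::move(fn)) { staging.reserve(cap); }

       void add_frame(const unsigned char *hdr, std::size_t hdr_len,
                      const unsigned char *payload, std::size_t payload_len) {
           if (staging.size() + hdr_len + payload_len > capacity) {
               flush(); // staging block is full: write everything accumulated so far
           }
           staging.insert(staging.end(), hdr, hdr + hdr_len);
           staging.insert(staging.end(), payload, payload + payload_len);
       }

       void flush() {
           if (staging.empty()) return;
           write_fn(staging.data(), staging.size()); // one write covers many DATA frames
           ++write_calls;
           staging.clear();
       }
   };

   int main() {
       constexpr std::size_t frame_payload = 16 * 1024;       // 16KB DATA frame payload
       constexpr std::size_t block         = 1 * 1024 * 1024; // 1MB of response body
       unsigned char header[9] = {};                          // placeholder 9-byte frame header
       std::vector<unsigned char> payload(frame_payload, 'x');

       FrameBatcher batcher(256 * 1024, [](const unsigned char *, std::size_t len) {
           std::printf("write of %zu bytes\n", len);
       });
       for (std::size_t off = 0; off < block; off += frame_payload) {
           batcher.add_frame(header, sizeof(header), payload.data(), payload.size());
       }
       batcher.flush();
       std::printf("write calls for a 1MB block: %zu\n", batcher.write_calls); // 5 instead of 64
       return 0;
   }
   ```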
   
   Here's a benchmark result from my laptop:
   
   `$ time h2load -n10000 -c10 -m10 https://localhost:8443/8m`
   
   Before:
   ```
   finished in 58.30s, 171.54 req/s, 1.34GB/s
   requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
   status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
   traffic: 78.17GB (83932534571) total, 101.64KB (104077) headers (space savings 96.71%), 78.13GB (83886080000) data
                        min         max         mean         sd        +/- sd
   time for request:    18.00ms       5.53s    483.43ms    516.19ms    93.72%
   time for connect:     2.92ms      8.21ms      5.14ms      1.68ms    70.00%
   time to 1st byte:    44.94ms    164.22ms    116.20ms     33.34ms    70.00%
   req/s           :      17.15       29.08       20.61        3.75    80.00%
   
   real    0m58.313s
   user    0m40.274s
   sys     0m17.108s
   ```
   
   After:
   ```
   finished in 50.54s, 197.86 req/s, 1.55GB/s
   requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
   status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
   traffic: 78.17GB (83932534054) total, 101.14KB (103564) headers (space savings 96.72%), 78.13GB (83886080000) data
                        min         max         mean         sd        +/- sd
   time for request:    14.06ms       6.96s    416.36ms    559.53ms    95.08%
   time for connect:     2.75ms      7.41ms      4.89ms      1.48ms    60.00%
   time to 1st byte:    30.32ms    115.90ms     79.89ms     31.21ms    50.00%
   req/s           :      19.79       28.42       23.34        3.79    60.00%
   
   real    0m50.556s
   user    0m36.718s
   sys     0m13.533s
   ```
   
   For reference, here is the H1 result under the same conditions:
   ```
   finished in 50.20s, 199.18 req/s, 1.56GB/s
   requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
   status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
   traffic: 78.13GB (83889917615) total, 3.10MB (3247615) headers (space savings 0.00%), 78.13GB (83886080000) data
                        min         max         mean         sd        +/- sd
   time for request:    15.03ms      18.19s    416.14ms    803.15ms    98.16%
   time for connect:     3.57ms     10.81ms      6.10ms      2.10ms    80.00%
   time to 1st byte:     6.88ms     27.72ms     13.49ms      6.91ms    80.00%
   req/s           :      19.92       26.83       24.18        2.27    70.00%
   
   real    0m50.226s
   user    0m36.963s
   sys     0m12.938s
   ```

