merlimat opened a new pull request, #3837:
URL: https://github.com/apache/bookkeeper/pull/3837

   ### Motivation
   
   Note: this is stacked on top of #3830 & #3835
   
   This change improves the way AddRequest responses are sent to clients.
   
   The current flow is: 
    * The journal-force-thread issues the fsync on the journal file
 * We iterate over all the entries that were just synced and, for each of them:
        1. Trigger `channel.writeAndFlush()`
        2. This jumps to the connection's IO thread (Netty uses a `write()` to an `eventfd` to post the task and wake the epoll loop)
        3. Write the object to the connection and trigger the serialization logic
        4. Grab a `ByteBuf` from the pool and write ~20 bytes with the response
        5. Write and flush the buffer on the channel
        6. With the flush consolidator we try to group multiple buffers into a single `writev()` syscall, though each call will carry a long list of buffers, making the memcpy inefficient (see the pipeline sketch after this list)
        7. Release all the buffers and return them to the pool
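
   The flush consolidator referenced in step 6 is presumably Netty's `FlushConsolidationHandler`; as a rough illustration, this is how such a handler is typically installed in a pipeline (generic Netty usage, not BookKeeper's actual bootstrap code):

   ```java
   import io.netty.channel.ChannelInitializer;
   import io.netty.channel.socket.SocketChannel;
   import io.netty.handler.flush.FlushConsolidationHandler;

   // Illustrative Netty usage only, not BookKeeper's actual server setup.
   public class ConsolidatingInitializer extends ChannelInitializer<SocketChannel> {
       @Override
       protected void initChannel(SocketChannel ch) {
           // Coalesce up to 1024 flush() calls into fewer syscalls; the
           // consolidated writev() still carries one iovec entry per small
           // response buffer, which is what makes the copy inefficient.
           ch.pipeline().addFirst(new FlushConsolidationHandler(1024, true));
           // ... protocol handlers would be added after this
       }
   }
   ```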
   
   All these steps are quite expensive when the bookie is receiving a lot of 
small requests. 
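
   As a minimal sketch of that per-request pattern (the `SyncedRequest` type and its methods below are hypothetical stand-ins, not BookKeeper's actual classes):

   ```java
   import io.netty.channel.Channel;
   import java.util.List;

   // Hypothetical stand-in for a journal entry whose fsync has completed.
   interface SyncedRequest {
       Channel channel();
       Object response(); // the ~20-byte response object to be serialized
   }

   class PerRequestResponder {
       // Called from the journal-force-thread after the fsync completes.
       void sendResponses(List<SyncedRequest> synced) {
           for (SyncedRequest req : synced) {
               // Each call posts a task onto the connection's IO thread
               // (an eventfd write wakes its epoll when idle), serializes
               // the response into a freshly pooled ByteBuf, flushes it,
               // and later releases the buffer -- once per request.
               req.channel().writeAndFlush(req.response());
           }
       }
   }
   ```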
   
   This PR changes the flow to: 
   
   1. journal fsync
   2. go through each request and prepare the response into a per-connection `ByteBuf`, which is not yet written to the channel
   3. after preparing all the responses, flush them all at once: trigger an event on each connection that writes out its accumulated buffer
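
   A minimal sketch of this batched flow, reusing the hypothetical `SyncedRequest` stand-in from the snippet above (the encoding is a placeholder):

   ```java
   import io.netty.buffer.ByteBuf;
   import io.netty.channel.Channel;
   import java.util.LinkedHashMap;
   import java.util.Map;

   class BatchedResponder {
       // Touched only by the journal-force-thread, so no locking is needed.
       private final Map<Channel, ByteBuf> pending = new LinkedHashMap<>();

       // Step 2: accumulate each response into its connection's buffer;
       // nothing is written to the channel yet.
       void accumulate(SyncedRequest req) {
           Channel ch = req.channel();
           ByteBuf buf = pending.computeIfAbsent(ch, c -> c.alloc().buffer());
           encodeResponse(buf, req); // appends ~20 bytes per response
       }

       // Step 3: once all responses for this fsync are prepared, hand each
       // connection its single accumulated buffer; writeAndFlush() from a
       // non-IO thread posts exactly one task per connection.
       void flushAll() {
           pending.forEach((ch, buf) -> ch.writeAndFlush(buf));
           pending.clear();
       }

       private void encodeResponse(ByteBuf buf, SyncedRequest req) {
           buf.writeLong(0L); // placeholder for the real response encoding
       }
   }
   ```

   Netty releases each buffer after the socket write, so an fsync cycle costs one allocation and one IO-thread task per connection rather than one per request.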
   
   The advantages are: 
    1. 1 `ByteBuf` allocated per connection instead of 1 per request
       1. Fewer allocations and less stress on the buffer pool
       2. More efficient socket `write()` operations
    2. 1 task posted on the Netty IO threads per connection, instead of 1 per request.
   
   
   

