kirktrue commented on PR #20324:
URL: https://github.com/apache/kafka/pull/20324#issuecomment-3251191351

   > I definitely like the idea of having a consolidated 
`DoEverythingNeededForFetchEvent` that handles the logic that's currently split 
across `PollEvent`, `CheckAndUpdatePositionsEvent`, and 
`CreateFetchRequestsEvent`. Unfortunately, I see two constraints that prevent 
consolidating those events:
   > 
   > 1. Offset commit callbacks, background events, and collecting fetch data 
must be executed in the application thread
   > 2. The execution of those callbacks and events in the application thread 
is interleaved with the event logic executed in the background thread for 
`PollEvent`, `CheckAndUpdatePositionsEvent`, and `CreateFetchRequestsEvent`
   > 
   > If there's a way around these two issues, there's hope. I just don't yet 
see a way around those constraints without a major change 😞
   
   In thinking about the consolidated event approach a little more, I think 
there's a possible way forward. Here are some assumptions:
   
   1. I _assume_ that the majority of `poll()` executions execute neither async 
offset commit callbacks nor background events; i.e., I _assume_ these are 
_comparatively_ rare when it comes to polling.
   2. I _assume_ that there's a way to discern, from the background thread, 
whether we need to execute async offset commit callbacks or background events.
   3. I _assume_ the `DoEverythingNeededForFetchEvent` event processing could 
exit prematurely if it detects the need for the application thread's help.
   
   Given the above, can we introduce `DoEverythingNeededForFetchEvent` as an 
optimized "fast path" that executes the bulk of the fetch logic uninterrupted 
in the background thread, while keeping a "slow path" that falls back to one or 
both of `PollEvent` and `CheckAndUpdatePositionsEvent` when needed?
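   To make the fast-path/slow-path split concrete, here's a minimal sketch. 
Everything in it is an assumption: `DoEverythingNeededForFetchEvent`, 
`needsApplicationThread()`, and the stand-in state are illustrative names, not 
existing Kafka APIs.

   ```java
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;

   // Hypothetical sketch of the consolidated "fast path" fetch event that
   // bails out when it detects work that must run on the application thread.
   public class FastPathSketch {

       enum Result { COMPLETED_FAST_PATH, FALL_BACK_TO_SLOW_PATH }

       // Stand-ins for the state the background thread would inspect.
       static boolean hasPendingCommitCallbacks = false;
       static final BlockingQueue<Object> backgroundEvents = new LinkedBlockingQueue<>();

       // Assumption #2 above: the background thread can discern whether the
       // application thread's help is needed.
       static boolean needsApplicationThread() {
           return hasPendingCommitCallbacks || !backgroundEvents.isEmpty();
       }

       // Assumption #3 above: the consolidated event exits prematurely when it
       // detects application-thread work, signaling the caller to fall back to
       // the slow path (PollEvent + CheckAndUpdatePositionsEvent).
       static Result processDoEverythingNeededForFetchEvent() {
           if (needsApplicationThread())
               return Result.FALL_BACK_TO_SLOW_PATH;
           // ... poll logic, position validation, and fetch request creation
           // would run here, uninterrupted, on the background thread ...
           return Result.COMPLETED_FAST_PATH;
       }

       public static void main(String[] args) {
           // Common case (assumption #1): nothing pending, fast path wins.
           System.out.println(processDoEverythingNeededForFetchEvent());

           // Rare case: a background event arrives, so we fall back.
           backgroundEvents.add(new Object());
           System.out.println(processDoEverythingNeededForFetchEvent());
       }
   }
   ```

   The key design point is that the fallback decision is made entirely in the 
background thread, so the common case never round-trips to the application 
thread.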
   
   There's still another piece to the puzzle that I haven't figured out yet, 
which is how to handle the `CreateFetchRequestsEvent`. The 
`FetchCollector.collectFetch()` logic runs on the application thread because it 
may need to decrypt and/or decompress data from the buffer to determine if 
there are any returnable records. Maybe we just assume that if there's any data 
in the fetch buffer, we shouldn't execute the `CreateFetchRequestsEvent` logic? 
Can we _assume_ that, most of the time, data in the fetch buffer is usable, and 
if we later try to collect it and it turns out to be bogus, we simply fall back 
to the slow path?
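   That optimistic heuristic could look something like the sketch below. The 
class and method names (`BufferHeuristicSketch`, `shouldCreateFetchRequests`, 
`collectOrFallBack`) are hypothetical stand-ins, not Kafka classes.

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;

   // Hypothetical sketch of the "assume buffered data is usable" heuristic.
   public class BufferHeuristicSketch {

       // Stand-in for the consumer's buffer of completed fetch data.
       static final Deque<byte[]> fetchBuffer = new ArrayDeque<>();

       // Optimistic assumption: if the buffer already holds data, skip the
       // CreateFetchRequestsEvent logic on this pass.
       static boolean shouldCreateFetchRequests() {
           return fetchBuffer.isEmpty();
       }

       // If collecting the buffered data later finds nothing returnable
       // (e.g. after decompression), discard it and signal the caller to
       // take the slow path and issue fetch requests after all.
       static boolean collectOrFallBack(boolean dataTurnedOutUsable) {
           if (!dataTurnedOutUsable) {
               fetchBuffer.clear();      // discard the bogus data
               return false;             // caller takes the slow path
           }
           return true;                  // fast path succeeded
       }

       public static void main(String[] args) {
           fetchBuffer.add(new byte[] {1, 2, 3});
           System.out.println(shouldCreateFetchRequests()); // false: skip creating requests
           System.out.println(collectOrFallBack(false));    // false: fall back to slow path
           System.out.println(shouldCreateFetchRequests()); // true: buffer drained, create requests
       }
   }
   ```

   The cost of a wrong guess is one extra round trip (the slow path), which is 
acceptable if bogus buffered data really is rare.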


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
