steveloughran commented on PR #7214:
URL: https://github.com/apache/hadoop/pull/7214#issuecomment-2595139668

   @rajdchak thanks for the comments, will address
   
   I do want to pull up the vector IO support, with integration with prefetching 
and caching.
   
   For the prefetch/caching stream we'd ask for the requested ranges to be split 
up into:
   
   1. ranges which are wholly in memory: satisfy immediately in the current thread 
(or a copier thread?)
   1. ranges which have an active prefetch that will wholly satisfy the request: 
wire the prefetching up so that as soon as the data arrives, the range gets it.
   1. other ranges (not cached, not prefetched, or only partially in cache): 
coalesce as needed, then retrieve. Also notify the stream that these ranges are 
being fetched, so there is no need to prefetch them.
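   The three-way split above could be sketched roughly as below. The `cached` and 
`prefetching` block maps, and all class/method names, are invented for this sketch; 
they stand in for whatever state the real stream keeps:

   ```java
   import java.util.Map;
   import java.util.NavigableMap;
   import java.util.TreeMap;

   public class RangeClassifier {
     // Hypothetical state: block offset -> block length, for blocks
     // wholly in memory and blocks with an active prefetch in flight.
     private final NavigableMap<Long, Long> cached = new TreeMap<>();
     private final NavigableMap<Long, Long> prefetching = new TreeMap<>();

     public enum Bucket { IN_MEMORY, ACTIVE_PREFETCH, FETCH_REQUIRED }

     public void addCached(long offset, long length) { cached.put(offset, length); }
     public void addPrefetch(long offset, long length) { prefetching.put(offset, length); }

     /** True iff [offset, offset+length) lies wholly inside one block in the map. */
     private static boolean whollyCovered(NavigableMap<Long, Long> blocks,
                                          long offset, long length) {
       Map.Entry<Long, Long> e = blocks.floorEntry(offset);
       return e != null && offset + length <= e.getKey() + e.getValue();
     }

     /** Classify a requested range into one of the three buckets. */
     public Bucket classify(long offset, long length) {
       if (whollyCovered(cached, offset, length)) {
         return Bucket.IN_MEMORY;        // 1: satisfy immediately
       }
       if (whollyCovered(prefetching, offset, length)) {
         return Bucket.ACTIVE_PREFETCH;  // 2: attach to the in-flight fetch
       }
       return Bucket.FETCH_REQUIRED;     // 3: coalesce, then ranged GET
     }
   }
   ```

   A range only partially in cache falls through to `FETCH_REQUIRED`, matching 
case 3 above.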
   
   It'd be good to collect stats on cache hits/misses here, to assess how well 
vector reads integrate with prefetching and caching. When a list of ranges comes 
down, there is less need to infer the next range and prefetch it, and I'm not 
actually sure how important caching becomes. This is why setting parquet up to use 
vector IO already appears to give speedups comparable to the published analytics 
stream benchmarks.
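   Those hit/miss stats need be nothing more than a pair of counters plus a 
derived ratio; a minimal sketch (class and method names invented here):

   ```java
   import java.util.concurrent.atomic.AtomicLong;

   public class VectorReadStats {
     private final AtomicLong hits = new AtomicLong();    // served from cache/prefetch
     private final AtomicLong misses = new AtomicLong();  // needed a new GET

     public void recordHit() { hits.incrementAndGet(); }
     public void recordMiss() { misses.incrementAndGet(); }

     /** Fraction of ranges satisfied from cache/prefetch; 0 when nothing recorded. */
     public double hitRatio() {
       long h = hits.get();
       long total = h + misses.get();
       return total == 0 ? 0.0 : (double) h / total;
     }
   }
   ```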
   
   What I want is the best of both worlds: prefetch of rowgroups from stream 
inference, and, when vector reads come in, satisfy those by returning 
current/active prefetches or retrieving new ranges through ranged GET requests.
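   The "reuse an active prefetch, else issue a ranged GET" step can be expressed 
as a single map lookup over in-flight futures. This is a sketch only; the 
`inflight` map, `rangedGet` stand-in, and whole-block keying are assumptions, not 
the PR's implementation:

   ```java
   import java.nio.ByteBuffer;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;

   public class PrefetchReuse {
     // Hypothetical: in-flight prefetches keyed by block offset.
     private final ConcurrentMap<Long, CompletableFuture<ByteBuffer>> inflight =
         new ConcurrentHashMap<>();

     /**
      * Read a block: if a prefetch for this offset is already active,
      * the caller is attached to that future; otherwise a new ranged
      * GET is started and registered for later readers to share.
      */
     public CompletableFuture<ByteBuffer> read(long offset, long length) {
       return inflight.computeIfAbsent(offset, off -> rangedGet(off, length));
     }

     // Stand-in for an async HTTP ranged GET; invented for this sketch.
     private CompletableFuture<ByteBuffer> rangedGet(long offset, long length) {
       return CompletableFuture.supplyAsync(() -> ByteBuffer.allocate((int) length));
     }
   }
   ```

   `computeIfAbsent` gives the dedup for free: a vector read arriving while a 
prefetch is in flight gets the same future rather than a second GET.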
   
   #7105 is where that will go; I've halted that until this is in. And I'll 
only worry about that integration with prefetched/cached blocks for the 
analytics stream.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

