Lordworms commented on issue #9964:
URL: https://github.com/apache/arrow-datafusion/issues/9964#issuecomment-2062327479

   I have implemented a basic LRU metadata cache, but I think caching only the metadata gives just a slight performance improvement: we call the List_Object API just once, while we call the Get_Object API more than x times, where x is the number of files. Most of the time is spent calling the [get_object](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) API, since we have to call it for each object. So I think caching the parquet file itself would be the better approach, because most of the time goes into the get_range function (which calls the GetObject API and then reads the requested byte range of the parquet file).
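   Roughly, by "LRU metadata cache" I mean something shaped like the following (a minimal sketch assuming the `lru` and `parquet` crates; `MetadataCache` and `get_or_load` are hypothetical names, not existing DataFusion APIs):

```rust
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

use lru::LruCache;
use parquet::file::metadata::ParquetMetaData;

/// Hypothetical cache of decoded parquet footers keyed by object path,
/// so repeated queries over the same files skip the metadata fetch.
struct MetadataCache {
    inner: Mutex<LruCache<String, Arc<ParquetMetaData>>>,
}

impl MetadataCache {
    fn new(capacity: NonZeroUsize) -> Self {
        Self {
            inner: Mutex::new(LruCache::new(capacity)),
        }
    }

    /// Returns the cached metadata for `path`, or decodes it with `load`
    /// (one Get_Object round trip for the footer) and caches the result.
    fn get_or_load<F>(&self, path: &str, load: F) -> Arc<ParquetMetaData>
    where
        F: FnOnce() -> ParquetMetaData,
    {
        let mut cache = self.inner.lock().unwrap();
        if let Some(meta) = cache.get(path) {
            return Arc::clone(meta);
        }
        let meta = Arc::new(load());
        cache.put(path.to_string(), Arc::clone(&meta));
        meta
    }
}
```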
   1. Should we cache the parquet file itself? I've seen the current logic of
   2. How should the file cache be implemented? The current logic is complicated, since it is not just DataFusion calling get_range; arrow-rs calls it as well. For example, arrow's readMetaData calls the get_range API:
   ![image](https://github.com/apache/arrow-datafusion/assets/48054792/972476da-e307-42a0-8d39-7143212c67da)
   Also, should we cache the whole parquet file or only parts of it? (See the range-cache sketch after this list.)
   3. Which data structure should we choose? I have tried the Sequence Trie and the LRU-DashMap.
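   For question 2, one direction is to cache at the granularity of the byte ranges we actually fetch rather than whole files, by wrapping the store that both DataFusion and arrow-rs go through. A minimal sketch, assuming the `object_store` crate's `get_range(&Path, Range<usize>)` signature; `RangeCache` is a hypothetical name, and a real version would implement the full `ObjectStore` trait (delegating the other methods unchanged) so arrow-rs's calls are intercepted transparently:

```rust
use std::collections::HashMap;
use std::ops::Range;
use std::sync::{Arc, Mutex};

use bytes::Bytes;
use object_store::{path::Path, ObjectStore};

/// Hypothetical cache of fetched byte ranges keyed by (path, range), so
/// repeated footer / row-group reads hit memory instead of S3. A real
/// version would need LRU eviction to bound memory (question 3).
struct RangeCache {
    store: Arc<dyn ObjectStore>,
    ranges: Mutex<HashMap<(Path, Range<usize>), Bytes>>,
}

impl RangeCache {
    fn new(store: Arc<dyn ObjectStore>) -> Self {
        Self {
            store,
            ranges: Mutex::new(HashMap::new()),
        }
    }

    /// Serves the range from the cache, falling back to the inner store
    /// (one GetObject call with an HTTP Range header) on a miss.
    async fn get_range(
        &self,
        location: &Path,
        range: Range<usize>,
    ) -> object_store::Result<Bytes> {
        let key = (location.clone(), range.clone());
        {
            // Bytes clones are cheap (reference-counted), and the lock is
            // released before the await below.
            let cached = self.ranges.lock().unwrap();
            if let Some(bytes) = cached.get(&key) {
                return Ok(bytes.clone());
            }
        }
        let bytes = self.store.get_range(location, range).await?;
        self.ranges.lock().unwrap().insert(key, bytes.clone());
        Ok(bytes)
    }
}
```

   Caching whole files would also work, but range-level caching keeps memory proportional to what queries actually touch.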
   
   I have been stuck on this for the last two days and would really appreciate your help @alamb @matthewmturner 
   

