Lordworms commented on issue #9964:
URL: 
https://github.com/apache/arrow-datafusion/issues/9964#issuecomment-2057936131

   Sorry for the late update. After a deeper analysis, I think a direct call to 
the ObjectStore (a direct call, with no pruning) cannot be optimized by caching 
the metadata. Currently we mainly use two S3 APIs: 
[list_object](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
 and 
[get_object](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html).
 I looked into the code, analyzed the traffic with Wireshark, and found that 
DataFusion currently makes one list_object call plus x get_object calls, where 
x is the number of files. I don't think these calls can be avoided by adding 
caches, since in 
[fetch_parquet_metadata](https://github.com/apache/arrow-datafusion/blob/main/datafusion/core/src/datasource/file_format/parquet.rs#L389-L401)
 we need the actual Parquet file body rather than just the ObjectMeta. I'll 
look into the pruning issue next to see if I can make some progress there.
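To illustrate why the ObjectMeta from the listing is not enough: a Parquet file ends with an 8-byte footer (a 4-byte little-endian metadata length followed by the magic bytes `PAR1`), so even just locating the metadata requires a ranged get_object read of the file tail. Below is a minimal, self-contained sketch of that footer parse; `footer_metadata_len` is a hypothetical helper for illustration, not DataFusion's actual implementation.

```rust
// Sketch: parsing the 8-byte Parquet footer to find the metadata length.
// Layout of the last 8 bytes of a Parquet file:
//   [4-byte little-endian metadata length][magic "PAR1"]
// Reading this requires file bytes (a get_object call), not ObjectMeta.

fn footer_metadata_len(tail: &[u8; 8]) -> Option<u32> {
    // The file must end with the Parquet magic bytes.
    if &tail[4..] != b"PAR1" {
        return None;
    }
    // The preceding 4 bytes are the little-endian footer metadata length.
    Some(u32::from_le_bytes([tail[0], tail[1], tail[2], tail[3]]))
}

fn main() {
    // Fake file tail: metadata length 1234, then the magic.
    let mut tail = [0u8; 8];
    tail[..4].copy_from_slice(&1234u32.to_le_bytes());
    tail[4..].copy_from_slice(b"PAR1");
    println!("metadata length: {:?}", footer_metadata_len(&tail));
}
```

In practice this is why a metadata cache must be keyed on file contents (e.g. path plus last-modified/size) and populated from actual reads; the listing alone can never supply the footer bytes.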


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
