gatesn opened a new issue, #13720:
URL: https://github.com/apache/datafusion/issues/13720

   ### Is your feature request related to a problem or challenge?
   
   The `FileFormat` trait has `infer_schema` and `infer_stats` methods that are given `ObjectMeta`s, after which `create_physical_plan` returns an `ExecutionPlan`.
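
   For context, the relevant methods look roughly like this; this is paraphrased from memory, and the exact signatures and import paths vary across DataFusion versions:

   ```rust
   use std::sync::Arc;

   use async_trait::async_trait;
   use datafusion::arrow::datatypes::SchemaRef;
   use datafusion::common::Statistics;
   use datafusion::datasource::physical_plan::FileScanConfig;
   use datafusion::error::Result;
   use datafusion::execution::context::SessionState;
   use datafusion::physical_plan::{ExecutionPlan, PhysicalExpr};
   use object_store::{ObjectMeta, ObjectStore};

   /// Abridged paraphrase of datafusion::datasource::file_format::FileFormat.
   #[async_trait]
   pub trait FileFormat: Send + Sync {
       /// Both inference methods receive ObjectMeta, which carries a required size.
       async fn infer_schema(
           &self,
           state: &SessionState,
           store: &Arc<dyn ObjectStore>,
           objects: &[ObjectMeta],
       ) -> Result<SchemaRef>;

       async fn infer_stats(
           &self,
           state: &SessionState,
           store: &Arc<dyn ObjectStore>,
           table_schema: SchemaRef,
           object: &ObjectMeta,
       ) -> Result<Statistics>;

       async fn create_physical_plan(
           &self,
           state: &SessionState,
           conf: FileScanConfig,
           filters: Option<&Arc<dyn PhysicalExpr>>,
       ) -> Result<Arc<dyn ExecutionPlan>>;
   }
   ```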
   
   First, `ObjectMeta` has a required `size` field, implying that we either know the object size externally or a HEAD request is made, even though we are often able to issue a suffix range request (reading backwards from the end of the object) to fetch the stats and metadata.
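
   For example, the `object_store` crate supports this directly via `GetRange::Suffix`. A minimal sketch (`fetch_tail` is a hypothetical helper, and the range's integer type differs across `object_store` versions):

   ```rust
   use bytes::Bytes;
   use object_store::{path::Path, GetOptions, GetRange, ObjectStore};

   /// Fetch the last `n` bytes of an object without knowing its total size,
   /// avoiding the HEAD request that a fully-populated ObjectMeta implies.
   async fn fetch_tail(
       store: &dyn ObjectStore,
       path: &Path,
       n: usize,
   ) -> object_store::Result<Bytes> {
       let opts = GetOptions {
           range: Some(GetRange::Suffix(n)),
           ..Default::default()
       };
       store.get_opts(path, opts).await?.bytes().await
   }
   ```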
   
   Second, `infer_schema` and `infer_stats` each independently open the file and read its metadata (at least in the Parquet implementation), and without a custom `ParquetFileReaderFactory` the `ParquetExecBuilder` will open and read the metadata a third time.
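
   The third read can be avoided with a factory that serves a footer cached at inference time. A sketch of what we have in mind, where `CachedMetadataReaderFactory`, `CachedReader`, and the `DashMap` cache are our own hypothetical names, and the trait signatures track the DataFusion and parquet crates as of this writing:

   ```rust
   use std::ops::Range;
   use std::sync::Arc;

   use bytes::Bytes;
   use dashmap::DashMap;
   use datafusion::datasource::physical_plan::{FileMeta, ParquetFileReaderFactory};
   use datafusion::error::Result;
   use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
   use futures::future::BoxFuture;
   use futures::FutureExt;
   use object_store::{path::Path, ObjectStore};
   use parquet::arrow::async_reader::{AsyncFileReader, ParquetObjectReader};
   use parquet::file::metadata::ParquetMetaData;

   /// Factory that hands out readers which serve a footer cached at
   /// schema-inference time instead of re-reading it from object storage.
   #[derive(Debug)]
   struct CachedMetadataReaderFactory {
       store: Arc<dyn ObjectStore>,
       // Hypothetical cache, populated during infer_schema.
       cache: Arc<DashMap<Path, Arc<ParquetMetaData>>>,
   }

   struct CachedReader {
       inner: ParquetObjectReader,
       metadata: Option<Arc<ParquetMetaData>>,
   }

   impl AsyncFileReader for CachedReader {
       fn get_bytes(&mut self, range: Range<usize>) -> BoxFuture<'_, parquet::errors::Result<Bytes>> {
           self.inner.get_bytes(range)
       }

       fn get_metadata(&mut self) -> BoxFuture<'_, parquet::errors::Result<Arc<ParquetMetaData>>> {
           if let Some(md) = self.metadata.clone() {
               // Cache hit: no footer round-trip at execution time.
               return async move { Ok(md) }.boxed();
           }
           self.inner.get_metadata()
       }
   }

   impl ParquetFileReaderFactory for CachedMetadataReaderFactory {
       fn create_reader(
           &self,
           _partition_index: usize,
           file_meta: FileMeta,
           _metadata_size_hint: Option<usize>,
           _metrics: &ExecutionPlanMetricsSet,
       ) -> Result<Box<dyn AsyncFileReader + Send>> {
           let metadata = self
               .cache
               .get(&file_meta.object_meta.location)
               .map(|entry| Arc::clone(entry.value()));
           let inner = ParquetObjectReader::new(Arc::clone(&self.store), file_meta.object_meta);
           Ok(Box::new(CachedReader { inner, metadata }))
       }
   }
   ```

   The factory would be installed via `ParquetExecBuilder::with_parquet_file_reader_factory`, but that still leaves the two reads during inference.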
   
   What would be the recommended way to carry the metadata through from the initial `infer_schema` call and reuse it for `infer_stats` and inside the execution plan? Should we keep a session-scoped cache inside our `FileFormat` impl keyed by `ObjectMeta`? If so, what would the recommended cache key be, given that `ObjectMeta` does not implement `Hash`?
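
   One option we can see is deriving a key from the fields of `ObjectMeta` that do implement `Hash`. A sketch (`ObjectKey` is a made-up name, and `size` is `usize` in the `object_store` versions we use):

   ```rust
   use chrono::{DateTime, Utc};
   use object_store::{path::Path, ObjectMeta};

   /// Hypothetical cache key: ObjectMeta itself does not implement Hash,
   /// but its identifying fields do. e_tag/version could be folded in too.
   #[derive(Clone, PartialEq, Eq, Hash)]
   struct ObjectKey {
       location: Path,
       last_modified: DateTime<Utc>,
       size: usize,
   }

   impl From<&ObjectMeta> for ObjectKey {
       fn from(meta: &ObjectMeta) -> Self {
           Self {
               location: meta.location.clone(),
               last_modified: meta.last_modified,
               size: meta.size,
           }
       }
   }
   ```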
   
   Would it be better to pass `PartitionedFile` into `infer_schema` and `infer_stats` so we can stash the metadata inside its `extensions` field?
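
   That would allow something along these lines, where `attach` and `retrieve` are hypothetical helpers:

   ```rust
   use std::any::Any;
   use std::sync::Arc;

   use datafusion::datasource::listing::PartitionedFile;
   use parquet::file::metadata::ParquetMetaData;

   /// Stash the parsed footer in PartitionedFile::extensions at planning time.
   fn attach(mut file: PartitionedFile, md: Arc<ParquetMetaData>) -> PartitionedFile {
       file.extensions = Some(Arc::new(md) as Arc<dyn Any + Send + Sync>);
       file
   }

   /// Downcast it back out later, e.g. inside a custom reader factory.
   fn retrieve(file: &PartitionedFile) -> Option<Arc<ParquetMetaData>> {
       file.extensions
           .as_ref()?
           .downcast_ref::<Arc<ParquetMetaData>>()
           .cloned()
   }
   ```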
   
   Or should we avoid `FileFormat` entirely and go the route of a custom `TableProvider`?
   
   Thank you for your thoughts!
   
   ### Describe the solution you'd like
   
   _No response_
   
   ### Describe alternatives you've considered
   
   _No response_
   
   ### Additional context
   
   _No response_

