rdblue commented on a change in pull request #25651: [SPARK-28948][SQL] Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/25651#discussion_r335224387
 
 

 ##########
 File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableProvider.java
 ##########
 @@ -36,26 +35,12 @@
 public interface TableProvider {
 
   /**
 -   * Return a {@link Table} instance to do read/write with user-specified options.
 +   * Return a {@link Table} instance with the given table options to do read/write.
+   * Implementations should infer the table schema and partitioning.
    *
    * @param options the user-specified options that can identify a table, e.g. file path, Kafka
    *                topic name, etc. It's an immutable case-insensitive string-to-string map.
    */
+  // TODO: this should take a Map<String, String> as table properties.
 
 Review comment:
   A TableProvider could use a static cache. We do this for our Hive client pool in Iceberg.
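   A minimal sketch of that caching idea in Java (`HiveClient` and `CachingProvider` are hypothetical stand-ins for illustration, not Spark or Iceberg classes):

```java
import java.util.concurrent.ConcurrentHashMap;

// HiveClient stands in for any expensive-to-create client; Iceberg keeps
// its Hive metastore client pool in a static cache along these lines.
class HiveClient {
  HiveClient(String metastoreUri) {
    // open connections, start the pool, etc.
  }
}

class CachingProvider {
  // Static, so the cache outlives individual provider instances and is
  // shared by every getTable() call in the JVM.
  private static final ConcurrentHashMap<String, HiveClient> CLIENTS =
      new ConcurrentHashMap<>();

  static HiveClient clientFor(String metastoreUri) {
    // computeIfAbsent builds each client at most once per URI.
    return CLIENTS.computeIfAbsent(metastoreUri, HiveClient::new);
  }
}
```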
   
   I don't think that partition inference needs to scan the entire file system tree. That seems needlessly expensive, given that mixed-depth partitions are not allowed. Inference should find the first live data file and use its path to determine the partitioning.
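   A sketch of that path-based inference, assuming Hive-style `column=value` directory names and a table root known from the options (the class and method names are illustrative, not from the PR):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

class PartitionInference {
  // Derive partition column names from the first live data file's path,
  // relative to the table root. No directory tree scan is needed because
  // mixed-depth partitions are not allowed.
  static List<String> inferPartitionColumns(Path tableRoot, Path firstDataFile) {
    List<String> columns = new ArrayList<>();
    Path relative = tableRoot.relativize(firstDataFile.getParent());
    for (Path dir : relative) {
      String name = dir.toString();
      int eq = name.indexOf('=');
      if (eq > 0) {
        columns.add(name.substring(0, eq)); // "date=2019-10-15" -> "date"
      }
    }
    return columns;
  }

  public static void main(String[] args) {
    // Prints [date, hour]
    System.out.println(inferPartitionColumns(
        Paths.get("/warehouse/t"),
        Paths.get("/warehouse/t/date=2019-10-15/hour=01/part-00000.parquet")));
  }
}
```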
   
   In any case, I think the plan to add `inferSchema` and `inferPartitioning` is fine; it is just the implementation that needs fixing. Schema and partitioning inference is known to be expensive, so I'm not sure how much sense it makes to over-optimize here.
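   For reference, one shape the planned hooks could take, with the TODO above folded in (the signatures here are illustrative, not the final API):

```java
import java.util.Map;
import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// The expensive inference steps become separate calls, so Spark can skip
// them whenever a user-specified schema and partitioning are available,
// and getTable receives table properties as a plain map.
interface InferringTableProvider {
  // Infer the table schema from the options, e.g. by reading file footers.
  StructType inferSchema(CaseInsensitiveStringMap options);

  // Infer the partitioning; unpartitioned by default.
  default Transform[] inferPartitioning(CaseInsensitiveStringMap options) {
    return new Transform[0];
  }

  // Build the table from already-resolved schema, partitioning, and properties.
  Table getTable(StructType schema, Transform[] partitioning, Map<String, String> properties);
}
```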
