cloud-fan commented on a change in pull request #25651: [SPARK-28948][SQL]
Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/25651#discussion_r335262702
##########
File path:
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableProvider.java
##########
@@ -36,26 +35,12 @@
public interface TableProvider {
/**
-   * Return a {@link Table} instance to do read/write with user-specified options.
+   * Return a {@link Table} instance with the given table options to do read/write.
+   * Implementations should infer the table schema and partitioning.
    *
    * @param options the user-specified options that can identify a table, e.g. file path, Kafka
    *                topic name, etc. It's an immutable case-insensitive string-to-string map.
    */
+ // TODO: this should take a Map<String, String> as table properties.
Review comment:
> I don't think that partition inference needs to scan the entire file system tree.

Spark does need to, in order to collect all the partition values and infer their schema. It's an existing feature that Spark infers a common type for partition values of different types. The same applies to schema inference: Spark can read Parquet files with different but compatible schemas, so it must read all the files to infer the merged schema.

Can you share more about the static cache? Do you mean a global cache that maps a directory to its listed files?
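As a rough illustration of the common-type inference described above, here is a minimal Python sketch. It is not Spark's actual implementation (which lives in the Scala `PartitioningUtils`); the widening order int < double < string is an assumption modeled on Spark's partition-column type inference, and it only demonstrates why every partition value must be seen before a common type can be chosen.

```python
# Hypothetical sketch of partition-value common-type inference.
# Spark must list the whole directory tree to collect all partition
# values; only then can it widen them to a single common type.

def infer_type(value: str) -> str:
    """Guess the narrowest type for a single partition value."""
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "double"
    except ValueError:
        return "string"

def common_type(types) -> str:
    """Widen to the least common type, assuming int < double < string."""
    order = ["int", "double", "string"]
    return max(types, key=order.index)

# Partition values collected from directory names like date=1, date=2.5:
values = ["1", "2", "2.5"]
print(common_type(infer_type(v) for v in values))  # double
```

Note that seeing only `["1", "2"]` would yield `int`; the later value `"2.5"` widens the result to `double`, which is why a partial scan cannot safely decide the type.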
----------------------------------------------------------------
This is an automated message from the Apache Git Service.