rdblue commented on a change in pull request #25651: [SPARK-28948][SQL] Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/25651#discussion_r328340643
##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableProvider.java
##########
@@ -36,26 +35,12 @@
 public interface TableProvider {
   /**
-   * Return a {@link Table} instance to do read/write with user-specified options.
+   * Return a {@link Table} instance with the given table options to do read/write.
+   * Implementations should infer the table schema and partitioning.
    *
    * @param options the user-specified options that can identify a table, e.g. file path, Kafka
    *                topic name, etc. It's an immutable case-insensitive string-to-string map.
    */
+  // TODO: this should take a Map<String, String> as table properties.
Review comment:
Another option for avoiding this is to separate the schema and partition inference from the `getTable` method. In that case, `TableProvider` would expose `inferSchema(CaseInsensitiveStringMap)` and `inferPartitioning(CaseInsensitiveStringMap)`, so a single `getTable`-style call (called `buildTable` in the sketch below) is enough to create the table:
```java
interface TableProvider {
  StructType inferSchema(CaseInsensitiveStringMap options);
  Transform[] inferPartitioning(CaseInsensitiveStringMap options);
  Table buildTable(StructType schema, Transform[] partitioning, Map<String, String> properties);
}
```
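For illustration, here is a rough sketch of how the caller side could then work. The `loadTable` helper and the options-to-properties mapping are hypothetical, just to show the flow; the real wiring would live in Spark's data source resolution code.
```java
// Hypothetical caller-side flow for the split interface above.
// Schema and partitioning are inferred once, then the table is built in a single call.
Table loadTable(TableProvider provider, CaseInsensitiveStringMap options) {
  StructType schema = provider.inferSchema(options);
  Transform[] partitioning = provider.inferPartitioning(options);
  // How options map to table properties is still an open question in this PR;
  // passing them through unchanged is just an assumption for this sketch.
  Map<String, String> properties = options.asCaseSensitiveMap();
  return provider.buildTable(schema, partitioning, properties);
}
```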
What do you think?