Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19796#discussion_r230609716
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -411,7 +410,29 @@ abstract class Catalog {
       tableName: String,
       source: String,
       schema: StructType,
-      options: Map[String, String]): DataFrame
+      options: Map[String, String]): DataFrame = {
+    createTable(tableName, source, schema, options, Nil)
+  }
+
+  /**
+   * :: Experimental ::
+   * (Scala-specific)
+   * Create a table based on the dataset in a data source, a schema, a set of options,
+   * and a set of partition columns. Then, returns the corresponding DataFrame.
+   *
+   * @param tableName is either a qualified or unqualified name that designates a table.
+   *                  If no database identifier is provided, it refers to a table in
+   *                  the current database.
+   * @since ???
+   */
+  @Experimental
+  @InterfaceStability.Evolving
+  def createTable(
+      tableName: String,
+      source: String,
+      schema: StructType,
+      options: Map[String, String],
+      partitionColumnNames: Seq[String]): DataFrame
--- End diff ---
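
[Editor's note: for context, a hypothetical call to the overload proposed in the diff above might look like the sketch below. The SparkSession, table name, source, and column names are assumptions for illustration, not part of the PR.]

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val spark = SparkSession.builder().appName("createTable-sketch").getOrCreate()

    val schema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("dt", StringType)))

    // Would create a parquet-backed table partitioned by the "dt" column.
    spark.catalog.createTable(
      "events", "parquet", schema,
      Map("path" -> "/tmp/events"),
      Seq("dt"))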
I think we will not introduce a new API for partitioning columns at the current stage. Let us use SQL DDL for creating the table.
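
[Editor's note: a minimal sketch of the SQL DDL route suggested here, using Spark's CREATE TABLE ... USING ... PARTITIONED BY syntax; the table and column names are hypothetical.]

    // Declares the partition columns in the DDL itself, so no new Catalog
    // API is needed.
    spark.sql("""
      CREATE TABLE events (id INT, dt STRING)
      USING parquet
      PARTITIONED BY (dt)
    """)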