kbendick commented on a change in pull request #1525:
URL: https://github.com/apache/iceberg/pull/1525#discussion_r497127475
##########
File path: spark/src/main/java/org/apache/iceberg/spark/SparkSchemaUtil.java
##########
@@ -63,6 +65,21 @@ public static Schema schemaForTable(SparkSession spark, String name) {
     return new Schema(converted.asNestedType().asStructType().fields());
   }
+  /**
+   * Given a Spark table identifier, determine the PartitionSpec.
+   * @param spark the SparkSession which contains the identifier
+   * @param table a TableIdentifier, if the namespace is left blank the catalog().currentDatabase() will be used
+   * @return a IcebergPartitionSpec representing the partitioning of the Spark table
+   * @throws AnalysisException if thrown by the Spark catalog
+   */
+  public static PartitionSpec specForTable(SparkSession spark, TableIdentifier table) throws AnalysisException {
+    String db = table.database().nonEmpty() ? table.database().get() : spark.catalog().currentDatabase();
+    PartitionSpec spec = identitySpec(
+        schemaForTable(spark, table.unquotedString()),
Review comment:
I figured it out once I typed it out. I should really start deleting my comments once I reach the answer; I end up rubber-duck debugging with myself in the GitHub comments 😅 .
##########
File path: spark/src/main/java/org/apache/iceberg/spark/SparkSchemaUtil.java
##########
@@ -63,6 +65,21 @@ public static Schema schemaForTable(SparkSession spark, String name) {
     return new Schema(converted.asNestedType().asStructType().fields());
   }
+  /**
+   * Given a Spark table identifier, determine the PartitionSpec.
+   * @param spark the SparkSession which contains the identifier
+   * @param table a TableIdentifier, if the namespace is left blank the catalog().currentDatabase() will be used
+   * @return a IcebergPartitionSpec representing the partitioning of the Spark table
+   * @throws AnalysisException if thrown by the Spark catalog
+   */
+  public static PartitionSpec specForTable(SparkSession spark, TableIdentifier table) throws AnalysisException {
+    String db = table.database().nonEmpty() ? table.database().get() : spark.catalog().currentDatabase();
+    PartitionSpec spec = identitySpec(
+        schemaForTable(spark, table.unquotedString()),
+        spark.catalog().listColumns(db, table.table()).collectAsList());
+    return spec == null ? PartitionSpec.unpartitioned() : spec;
Review comment:
So I've thought about this more, and I think what's missing / what's bothering me is honestly just an empty line between the doc comment's description and its `@param` section. 😅
So definitely file that one under `nits`.
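For reference, the nit would look something like the sketch below: the same Javadoc with a blank `*` line separating the summary sentence from the block tags, which is the conventional Javadoc layout. The method body is elided since only the comment formatting changes.

```java
/**
 * Given a Spark table identifier, determine the PartitionSpec.
 *
 * @param spark the SparkSession which contains the identifier
 * @param table a TableIdentifier, if the namespace is left blank the catalog().currentDatabase() will be used
 * @return a IcebergPartitionSpec representing the partitioning of the Spark table
 * @throws AnalysisException if thrown by the Spark catalog
 */
public static PartitionSpec specForTable(SparkSession spark, TableIdentifier table) throws AnalysisException {
  // ... body unchanged ...
}
```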
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]