rdblue commented on a change in pull request #1525:
URL: https://github.com/apache/iceberg/pull/1525#discussion_r500669524



##########
File path: spark/src/main/java/org/apache/iceberg/spark/SparkSchemaUtil.java
##########
@@ -63,6 +65,21 @@ public static Schema schemaForTable(SparkSession spark, String name) {
     return new Schema(converted.asNestedType().asStructType().fields());
   }
 
+  /**
+   * Given a Spark table identifier, determine the PartitionSpec.
+   *
+   * @param spark the SparkSession which contains the identifier
+   * @param table a TableIdentifier; if the namespace is left blank, catalog().currentDatabase() will be used
+   * @return an IcebergPartitionSpec representing the partitioning of the Spark table
+   * @throws AnalysisException if thrown by the Spark catalog
+   */
+  public static PartitionSpec specForTable(SparkSession spark, TableIdentifier table) throws AnalysisException {

Review comment:
       I don't think that we should leak the v1 Spark API (`TableIdentifier`) in a util class like this. What about passing the database and table name separately? Then the caller is responsible for supplying the database, which avoids the need to use the v1 `spark.catalog()` to default it.
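       A minimal sketch of what the suggested refactor might look like. The class and method names below are illustrative assumptions, not the actual Iceberg code: the point is only that the util takes plain `String` arguments and the caller resolves the current database up front.

```java
// Hypothetical sketch of the reviewer's suggestion: the util accepts
// database and table name as plain strings, so it never needs the v1
// TableIdentifier or spark.catalog() to default the namespace.
public class SparkSchemaUtilSketch {

  // Caller supplies both parts explicitly; no defaulting happens here.
  public static String qualifiedName(String database, String table) {
    return database + "." + table;
  }

  public static void main(String[] args) {
    // The caller would resolve the current database itself, e.g. via
    // spark.catalog().currentDatabase() (hardcoded here for illustration),
    // and pass it in explicitly.
    String db = "db";
    System.out.println(qualifiedName(db, "events"));  // prints db.events
  }
}
```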




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
