rdblue commented on a change in pull request #1875:
URL: https://github.com/apache/iceberg/pull/1875#discussion_r547447774
##########
File path: spark/src/main/java/org/apache/iceberg/spark/SparkUtil.java
##########
@@ -61,4 +65,40 @@ public static void validatePartitionTransforms(PartitionSpec spec) {
String.format("Cannot write using unsupported transforms: %s",
unsupported));
}
}
+
+ /**
+  * A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply.
+  * Attempts to find the catalog and identifier that a multipart identifier represents.
+  * @param nameParts Multipart identifier representing a table
+  * @return The CatalogPlugin and Identifier for the table
+  */
+ public static <C, T> Pair<C, T> catalogAndIdentifier(List<String> nameParts,
Review comment:
I think the logic here should be identical to the Spark 3 case, but with the
catalog load function and identifier construction replaced. That doesn't appear
to be what this does, because `catalog.apply` is called even when no catalog is
set (a 1-part name).
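For reference, a minimal sketch of the Spark-3-style resolution the review describes, with the catalog lookup and the identifier construction injected as functions. The class, parameter names, and the nested `Pair` type here are illustrative assumptions, not Iceberg's or Spark's actual API:

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

public class CatalogResolver {

  // Minimal Pair, just to keep this sketch self-contained.
  public static class Pair<A, B> {
    public final A first;
    public final B second;
    Pair(A a, B b) { first = a; second = b; }
  }

  /**
   * Hypothetical resolution mirroring Spark 3's
   * LookupCatalog.CatalogAndIdentifier.unapply: a 1-part name uses the
   * current catalog and namespace; otherwise the first part is tried as a
   * catalog name, and only the remaining parts form the identifier.
   */
  public static <C, T> Pair<C, T> catalogAndIdentifier(
      List<String> nameParts,
      Function<String, C> catalogLoader,            // returns null if no such catalog
      BiFunction<String[], String, T> identProvider, // (namespace, name) -> identifier
      C currentCatalog,
      String[] currentNamespace) {

    int last = nameParts.size() - 1;
    if (nameParts.size() == 1) {
      // 1-part name: current catalog and namespace, no catalog lookup at all.
      return new Pair<>(currentCatalog,
          identProvider.apply(currentNamespace, nameParts.get(0)));
    }

    C catalog = catalogLoader.apply(nameParts.get(0));
    if (catalog == null) {
      // First part is not a catalog: whole name is namespace + table in the
      // current catalog.
      String[] namespace = nameParts.subList(0, last).toArray(new String[0]);
      return new Pair<>(currentCatalog,
          identProvider.apply(namespace, nameParts.get(last)));
    }

    // First part resolved to a catalog: the rest is namespace + table.
    String[] namespace = nameParts.subList(1, last).toArray(new String[0]);
    return new Pair<>(catalog, identProvider.apply(namespace, nameParts.get(last)));
  }
}
```

The point of the comment is the first branch: a 1-part name should never hit the catalog loader, it should fall through to the current catalog directly.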
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]