rdblue commented on a change in pull request #1875:
URL: https://github.com/apache/iceberg/pull/1875#discussion_r547446128
##########
File path: spark/src/main/java/org/apache/iceberg/spark/SparkUtil.java
##########
@@ -61,4 +65,40 @@ public static void validatePartitionTransforms(PartitionSpec spec) {
          String.format("Cannot write using unsupported transforms: %s", unsupported));
    }
  }
+
+  /**
+   * A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply.
+   * Attempts to find the catalog and identifier that a multipart identifier represents.
+   * @param nameParts Multipart identifier representing a table
+   * @return The CatalogPlugin and Identifier for the table
+   */
+  public static <C, T> Pair<C, T> catalogAndIdentifier(List<String> nameParts,
+                                                       Function<String, C> catalog,
+                                                       IdentiferFunction<T> identifer,
+                                                       String[] currentNamespace) {
+ Preconditions.checkArgument(!nameParts.isEmpty(),
+ "Cannot determine catalog and Identifier from empty name parts");
+
+ int lastElementIndex = nameParts.size() - 1;
+ String name = nameParts.get(lastElementIndex);
+
+ if (nameParts.size() == 1) {
+ // Only a single element, use current catalog and namespace
+      return Pair.of(catalog.apply(null), identifer.of(currentNamespace, name));
+ } else {
+ try {
Review comment:
Rather than try/catch, I think this should check whether `catalog.apply` returns
null. If the result is null, then the catalog does not exist, so the catalog
should not be set in the pair (leave it null). Then the caller can fill in the
default catalog.
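
A minimal sketch of that null-check approach, for illustration only. The `SimplePair` type and the map-backed catalog lookup here are hypothetical stand-ins, not the Iceberg or Spark APIs:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class CatalogLookupSketch {

  // Hypothetical stand-in for a Pair type.
  static final class SimplePair<A, B> {
    final A first;
    final B second;

    SimplePair(A first, B second) {
      this.first = first;
      this.second = second;
    }
  }

  static <C> SimplePair<C, List<String>> catalogAndIdentifier(
      List<String> nameParts, Function<String, C> catalog) {
    if (nameParts.size() == 1) {
      // Single element: leave the catalog null so the caller fills in the default.
      return new SimplePair<>(null, nameParts);
    }
    // Check whether the first part names a catalog, without try/catch:
    // a null result means the catalog does not exist.
    C maybeCatalog = catalog.apply(nameParts.get(0));
    if (maybeCatalog == null) {
      // Not a catalog: the whole name is the identifier; caller uses the default catalog.
      return new SimplePair<>(null, nameParts);
    }
    return new SimplePair<>(maybeCatalog, nameParts.subList(1, nameParts.size()));
  }

  public static void main(String[] args) {
    Map<String, String> catalogs = new HashMap<>();
    catalogs.put("spark_catalog", "spark_catalog_plugin");

    SimplePair<String, List<String>> known =
        catalogAndIdentifier(Arrays.asList("spark_catalog", "db", "tbl"), catalogs::get);
    System.out.println(known.first + " " + known.second);

    SimplePair<String, List<String>> unknown =
        catalogAndIdentifier(Arrays.asList("db", "tbl"), catalogs::get);
    System.out.println(unknown.first + " " + unknown.second);
  }
}
```

The caller then checks `pair.first == null` and substitutes the session's default catalog, instead of relying on an exception to signal a missing catalog.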
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]