aokolnychyi commented on a change in pull request #1784:
URL: https://github.com/apache/iceberg/pull/1784#discussion_r527203140



##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -548,4 +565,83 @@ private static String sqlString(org.apache.iceberg.expressions.Literal<?> lit) {
       }
     }
   }
+
+  /*
+   * Because Spark does not allow more than one namespace part for a Session Catalog table, we circumvent
+   * the entire resolution path for tables and instead look up the table directly ourselves. This lets us correctly
+   * get metadata tables for the SessionCatalog; if we didn't have to work around this, we could just use spark.table.
+   */
+  private static Dataset<Row> loadCatalogMetadataTable(SparkSession spark, String name, MetadataTableType type)
+      throws CatalogNotFoundException, ParseException, NoSuchTableException {
+
+    CatalogAndIdentifier catalogAndIdentifier = catalogAndIdentifier(spark, name);
+    if (catalogAndIdentifier.catalog instanceof BaseCatalog) {
+      BaseCatalog catalog = (BaseCatalog) catalogAndIdentifier.catalog;
+      Identifier baseIdent = catalogAndIdentifier.identifier;
+      Identifier metaIdent = Identifier.of(ArrayUtils.add(baseIdent.namespace(), baseIdent.name()), type.name());
+      Table metaTable = catalog.loadTable(metaIdent);
+      return Dataset.ofRows(spark, DataSourceV2Relation.create(metaTable, Some.apply(catalog), Some.apply(metaIdent)));
+    } else {
+      throw new CatalogNotFoundException(String.format("Cannot cast %s as an Iceberg catalog",
+          catalogAndIdentifier.catalog.name()));
+    }
+  }
+
+  public static CatalogAndIdentifier catalogAndIdentifier(SparkSession spark, String name) throws ParseException {
+    return catalogAndIdentifier(spark,
+        JavaConverters.seqAsJavaList(spark.sessionState().sqlParser().parseMultipartIdentifier(name)));
+  }
+
+  /**
+   * A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply.
+   * Attempts to find the catalog and identifier that a multipart identifier represents.
+   * @param spark Spark session to use for resolution
+   * @param nameParts Multipart identifier representing a table
+   * @return The CatalogPlugin and Identifier for the table
+   */
+  public static CatalogAndIdentifier catalogAndIdentifier(SparkSession spark, List<String> nameParts) {
+    Seq<String> namePartsSeq = JavaConverters.asScalaIterator(nameParts.iterator()).toSeq();
+    Preconditions.checkArgument(namePartsSeq.nonEmpty(),
+        "Cannot determine catalog and identifier from empty name parts");
+    CatalogPlugin currentCatalog = spark.sessionState().catalogManager().currentCatalog();
+    String[] currentNamespace = spark.sessionState().catalogManager().currentNamespace();
+    if (namePartsSeq.length() == 1) {
+      return new CatalogAndIdentifier(currentCatalog, Identifier.of(currentNamespace, namePartsSeq.head()));
+    } else {
+      try {
+        CatalogPlugin namedCatalog = spark.sessionState().catalogManager().catalog(namePartsSeq.head());
+        return new CatalogAndIdentifier(namedCatalog,
+            CatalogV2Implicits.MultipartIdentifierHelper((Seq<String>) namePartsSeq.tail()).asIdentifier());
+      } catch (Exception e) {
+        return new CatalogAndIdentifier(currentCatalog,
+            CatalogV2Implicits.MultipartIdentifierHelper(namePartsSeq).asIdentifier());
+      }
+    }
+  }
+
+  public static TableIdentifier toTableIdentifier(Identifier table) {
+    return new CatalogV2Implicits.IdentifierHelper(table).asTableIdentifier();
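For readers following the diff: the metadata-table identifier construction in `loadCatalogMetadataTable` (appending the base table's name to its namespace and using the metadata table type as the new name) can be sketched without any Spark or Iceberg dependencies. The `Ident` class and `metadataIdent` helper below are stand-ins invented for illustration, not part of either API.

```java
import java.util.Arrays;

public class MetaIdentSketch {
  // Minimal stand-in for Spark's Identifier: a namespace array plus a name.
  static final class Ident {
    final String[] namespace;
    final String name;

    Ident(String[] namespace, String name) {
      this.namespace = namespace;
      this.name = name;
    }
  }

  // Mirrors Identifier.of(ArrayUtils.add(base.namespace(), base.name()), type.name()):
  // the base table name becomes the last namespace element, and the metadata
  // table type (e.g. "FILES") becomes the identifier's name.
  static Ident metadataIdent(Ident base, String metadataType) {
    String[] ns = Arrays.copyOf(base.namespace, base.namespace.length + 1);
    ns[base.namespace.length] = base.name;
    return new Ident(ns, metadataType);
  }

  public static void main(String[] args) {
    Ident base = new Ident(new String[] {"db"}, "tbl");
    Ident meta = metadataIdent(base, "FILES");
    System.out.println(String.join(".", meta.namespace) + "." + meta.name);
    // prints db.tbl.FILES
  }
}
```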
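Similarly, the resolution precedence in `catalogAndIdentifier(SparkSession, List<String>)` (a single part resolves against the current catalog and namespace; otherwise the head is tried as a catalog name, falling back to the current catalog for the whole name) can be illustrated with a dependency-free sketch. The catalog names, the `resolve` signature, and the arrow-formatted return strings here are all illustrative, not Spark API.

```java
import java.util.List;
import java.util.Set;

public class ResolutionSketch {
  // Stand-in for CatalogManager.catalog(name), which throws for unknown names.
  static final Set<String> KNOWN_CATALOGS = Set.of("spark_catalog", "iceberg");

  // Mirrors the precedence in catalogAndIdentifier(SparkSession, List<String>):
  //   one part           -> current catalog, current namespace
  //   head is a catalog  -> that catalog, remaining parts as identifier
  //   otherwise          -> current catalog, all parts as identifier
  static String resolve(List<String> parts, String currentCatalog, String currentNs) {
    if (parts.size() == 1) {
      return currentCatalog + " -> " + currentNs + "." + parts.get(0);
    } else if (KNOWN_CATALOGS.contains(parts.get(0))) {
      return parts.get(0) + " -> " + String.join(".", parts.subList(1, parts.size()));
    } else {
      return currentCatalog + " -> " + String.join(".", parts);
    }
  }

  public static void main(String[] args) {
    System.out.println(resolve(List.of("tbl"), "spark_catalog", "default"));
    // prints spark_catalog -> default.tbl
    System.out.println(resolve(List.of("iceberg", "db", "tbl"), "spark_catalog", "default"));
    // prints iceberg -> db.tbl
    System.out.println(resolve(List.of("db", "tbl"), "spark_catalog", "default"));
    // prints spark_catalog -> db.tbl
  }
}
```

Note the real code decides by catching the exception from the catalog lookup rather than checking a set, which is why any lookup failure falls through to the current catalog.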

Review comment:
       How stable do we expect these implicits to be?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]