rdblue commented on a change in pull request #1783:
URL: https://github.com/apache/iceberg/pull/1783#discussion_r539490195
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/IcebergSource.java
##########
@@ -19,22 +19,49 @@
package org.apache.iceberg.spark.source;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
import java.util.Map;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.iceberg.Table;
-import org.apache.iceberg.catalog.TableIdentifier;
-import org.apache.iceberg.hadoop.HadoopTables;
-import org.apache.iceberg.hive.HiveCatalog;
-import org.apache.iceberg.hive.HiveCatalogs;
import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.spark.PathIdentifier;
+import org.apache.iceberg.spark.Spark3Util;
+import org.apache.iceberg.spark.SparkCatalog;
+import org.apache.iceberg.spark.SparkSessionCatalog;
import org.apache.spark.sql.SparkSession;
-import org.apache.spark.sql.connector.catalog.TableProvider;
+import org.apache.spark.sql.catalyst.analysis.NoSuchTableException;
+import org.apache.spark.sql.catalyst.parser.ParseException;
+import org.apache.spark.sql.connector.catalog.CatalogManager;
+import org.apache.spark.sql.connector.catalog.CatalogPlugin;
+import org.apache.spark.sql.connector.catalog.Identifier;
+import org.apache.spark.sql.connector.catalog.SupportsCatalogOptions;
+import org.apache.spark.sql.connector.catalog.Table;
+import org.apache.spark.sql.connector.catalog.TableCatalog;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.sources.DataSourceRegister;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;
-public class IcebergSource implements DataSourceRegister, TableProvider {
+/**
+ * The IcebergSource loads/writes tables with format "iceberg". It can load paths and tables.
+ *
+ * How paths/tables are loaded when using spark.read().format("iceberg").load(table)
+ *
+ * table = "file:/path/to/table" -> loads a HadoopTable at the given path
+ * table = "catalog.`file:/path/to/table`" -> loads a HadoopTable at the given path using settings from 'catalog'
+ * table = "catalog.namespace.`file:/path/to/table`" -> fails. Namespaces don't exist for paths
+ * table = "tablename" -> loads currentCatalog.currentNamespace.tablename
+ * table = "xxx.tablename" -> if xxx is a catalog, loads "tablename" from that catalog;
+ *                            otherwise loads "xxx.tablename" from the current catalog
+ * table = "xxx.yyy.tablename" -> if xxx is a catalog, loads "yyy.tablename" from that catalog;
+ *                                otherwise loads "xxx.yyy.tablename" from the current catalog
Review comment:
In these examples, I think it is easier to understand if you separate the cases and use `catalog` and `namespace`, rather than "if xxx is a catalog ... otherwise ...".
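For illustration, the separated cases might read along these lines (a sketch of the suggestion, not the author's final wording):

```
table = "catalog.tablename" -> loads "tablename" from the specified catalog
table = "namespace.tablename" -> loads "namespace.tablename" from the current catalog
table = "catalog.namespace.tablename" -> loads "namespace.tablename" from the specified catalog
table = "namespace1.namespace2.tablename" -> loads "namespace1.namespace2.tablename" from the current catalog
```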
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]