aokolnychyi commented on a change in pull request #2210:
URL: https://github.com/apache/iceberg/pull/2210#discussion_r583254334
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -790,4 +804,53 @@ public Identifier identifier() {
public static TableIdentifier identifierToTableIdentifier(Identifier identifier) {
  return TableIdentifier.of(Namespace.of(identifier.namespace()), identifier.name());
}
+
+ /**
+ * Use Spark to list all partitions in the table.
+ *
+ * @param spark a Spark session
+ * @param rootPath the table's root path
+ * @param format format of the file
+ * @return all table's partitions
+ */
+  public static List<SparkTableUtil.SparkPartition> getPartitions(SparkSession spark, Path rootPath, String format) {
+ FileStatusCache fileStatusCache = FileStatusCache.getOrCreate(spark);
+ Map<String, String> emptyMap = Collections.emptyMap();
+
+ InMemoryFileIndex fileIndex = new InMemoryFileIndex(
Review comment:
@rdblue, I think `parquet`.`path/to/table` would always need to point to the root table location, just like SQL on files in Spark. The `partition` map would tell us which partitions to import.
Let's consider the following call:
```
iceberg.system.add_files(
  source_table => `parquet`.`path/to/table`,
  table => 'iceberg.db.tbl',
  partition => map('p1', 'v1', 'p2', 'v2')
)
```
I think the procedure would detect that the identifier points to a path-based Parquet table, take the root table location as the basis, and append the partition path. In other words, the call above would be translated into importing a single Parquet partition:
```
path/to/table/p1=v1/p2=v2
```
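To make the translation concrete, here is a minimal sketch (not code from this PR) of how the procedure could build that partition directory from the root location and the `partition` map. The helper name `toPartitionPath` is made up for illustration; the real procedure may resolve and validate the path differently.
```
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionPathSketch {
  // Builds "rootLocation/p1=v1/p2=v2" from the supplied partition map.
  // A LinkedHashMap preserves the caller's key order for the path segments.
  static String toPartitionPath(String rootLocation, Map<String, String> partition) {
    String suffix = partition.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining("/"));
    return rootLocation + "/" + suffix;
  }

  public static void main(String[] args) {
    Map<String, String> partition = new LinkedHashMap<>();
    partition.put("p1", "v1");
    partition.put("p2", "v2");
    // Prints: path/to/table/p1=v1/p2=v2
    System.out.println(toPartitionPath("path/to/table", partition));
  }
}
```
With that resolved path in hand, only the files under that single partition directory would be listed and added to the Iceberg table.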
Does that make sense?