aokolnychyi commented on a change in pull request #2210:
URL: https://github.com/apache/iceberg/pull/2210#discussion_r580599502



##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -790,4 +804,53 @@ public Identifier identifier() {
   public static TableIdentifier identifierToTableIdentifier(Identifier identifier) {
     return TableIdentifier.of(Namespace.of(identifier.namespace()), identifier.name());
   }
+
+  /**
+   * Use Spark to list all partitions in the table.
+   *
+   * @param spark a Spark session
+   * @param rootPath the table's root path
+   * @param format format of the table's data files
+   * @return all partitions in the table
+   */
+  public static List<SparkTableUtil.SparkPartition> getPartitions(SparkSession spark, Path rootPath, String format) {
+    FileStatusCache fileStatusCache = FileStatusCache.getOrCreate(spark);
+    Map<String, String> emptyMap = Collections.emptyMap();
+
+    InMemoryFileIndex fileIndex = new InMemoryFileIndex(

Review comment:
       @rdsr, I think the procedure can load the Iceberg table and use its spec 
to determine the correct order of partition columns. That seems like the most 
reliable way and does not require parsing partition strings.
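   As a rough sketch of the idea (plain-Java illustration only; in the real procedure the ordered column names would come from the loaded table's `PartitionSpec`, e.g. `table.spec().fields()`, rather than the hypothetical `specColumns` list used here):
   
   ```java
   import java.util.Arrays;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;
   
   public class SpecOrder {
     // Reorders parsed partition values to follow the spec's column order.
     // specColumns stands in for the names derived from the table's spec;
     // parsed is the unordered column -> value map for one partition.
     // Assumes every spec column is present in the parsed map.
     static Map<String, String> orderBySpec(List<String> specColumns, Map<String, String> parsed) {
       return specColumns.stream()
           .collect(Collectors.toMap(c -> c, parsed::get, (a, b) -> a, LinkedHashMap::new));
     }
   
     public static void main(String[] args) {
       Map<String, String> parsed = Map.of("p2", "v2", "p1", "v1");
       System.out.println(orderBySpec(Arrays.asList("p1", "p2"), parsed));
       // prints {p1=v1, p2=v2}
     }
   }
   ```
   
   That way there is no guessing about column order when handling partition values.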
   
   @pvary brought up a good point about custom partition locations in 
[this](https://github.com/apache/iceberg/pull/2210#discussion_r578317436) 
comment. 
   
   The API I proposed in the comment above won't work for that. @RussellSpitzer 
and I had a quick chat and it seems we can support the following:
   
   ```
   iceberg.system.add_files(
     source_table => 'source_table', -- required
     table => 'db.tbl', -- required
     partition => map('p1', '*', 'p2', 'v2') -- optional
   )
   ```
   
   The procedure can be called like this:
   
   ```
   -- path-based
   iceberg.system.add_files(
     source_table => `parquet`.`path/to/table`,
     table => 'iceberg.db.tbl',
     partition => map('p1', '*', 'p2', 'v2')
   )
   ```
   
   ```
   -- metastore-based
   iceberg.system.add_files(
     source_table => `db.hive_tbl`,
     table => 'db.iceberg_tbl',
     partition => map('p1', '*', 'p2', 'v2')
   )
   ```
   
   This way, we should support custom partition locations because we will consult HMS for Hive tables.
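   
   For clarity, the `'*'` wildcard in the `partition` map above would mean "any value for this column". A rough plain-Java sketch of that matching logic (class and method names here are hypothetical, not the actual implementation):
   
   ```java
   import java.util.Map;
   
   public class PartitionFilter {
     // Returns true if the partition's values satisfy the filter map,
     // where a "*" filter value matches any value for that column.
     static boolean matches(Map<String, String> filter, Map<String, String> partition) {
       return filter.entrySet().stream()
           .allMatch(e -> "*".equals(e.getValue())
               || e.getValue().equals(partition.get(e.getKey())));
     }
   
     public static void main(String[] args) {
       Map<String, String> filter = Map.of("p1", "*", "p2", "v2");
       System.out.println(matches(filter, Map.of("p1", "anything", "p2", "v2")));
       // prints true
       System.out.println(matches(filter, Map.of("p1", "anything", "p2", "v3")));
       // prints false
     }
   }
   ```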




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
