aokolnychyi commented on a change in pull request #2210:
URL: https://github.com/apache/iceberg/pull/2210#discussion_r571191846



##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -790,4 +804,53 @@ public Identifier identifier() {
  public static TableIdentifier identifierToTableIdentifier(Identifier identifier) {
    return TableIdentifier.of(Namespace.of(identifier.namespace()), identifier.name());
   }
+
+  /**
+   * Uses Spark to list all partitions in the table.
+   *
+   * @param spark a Spark session
+   * @param rootPath the root path of the table
+   * @param format the format of the table's data files
+   * @return all of the table's partitions
+   */
+  public static List<SparkTableUtil.SparkPartition> getPartitions(SparkSession spark, Path rootPath, String format) {
+    FileStatusCache fileStatusCache = FileStatusCache.getOrCreate(spark);
+    Map<String, String> emptyMap = Collections.emptyMap();
+
+    InMemoryFileIndex fileIndex = new InMemoryFileIndex(

Review comment:
       Could we use reflection for this? It seems we had a similar issue in `HiveClientPool`.
   
   I understand `InMemoryFileIndex` is a low-level API and it may not be the best idea to rely on it. I'm OK with this as a first step; we can build our own logic if we need it in other query engines.
   
   I'd also be alright with always requiring a specific partition to import, rather than supporting the import of all partitions under a given path; that would avoid the need to use Spark's `InMemoryFileIndex` to infer the underlying partitioning. As I wrote before, I think the primary use case for this is CREATE TABLE LIKE followed by calling `add_files` for a couple of partitions to try reads/writes, or migrating partition by partition when a table has too many partitions to migrate at once. In both cases, there is a specific partition we have to import.
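   To make that concrete, importing a single partition with the `add_files` procedure might look like the sketch below. The catalog, table, and path names are made up for illustration, and the exact procedure signature is whatever this PR ends up with:

```sql
-- Hypothetical sketch: import only the files under one partition directory
-- into an existing Iceberg table. All names and paths are illustrative.
CALL spark_catalog.system.add_files(
  table => 'db.iceberg_table',
  source_table => '`parquet`.`/warehouse/db/source_table/part=1`'
);
```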
   
   If there is a need to import all files in a given location, one can create an external Spark table pointing at that location, call `MSCK REPAIR TABLE` to infer the partitions and add them to the metastore, and then call either the snapshot or the migrate procedure.
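   That workflow could be sketched roughly as follows; the table names, schema, and location are hypothetical, and the procedure names assume the snapshot/migrate procedures as they exist today:

```sql
-- Hypothetical sketch; all names, columns, and paths are illustrative.
-- 1. Create an external Spark table pointing at the existing data location.
CREATE TABLE db.external_table (id BIGINT, data STRING, part STRING)
USING parquet
PARTITIONED BY (part)
LOCATION '/warehouse/db/external_table';

-- 2. Discover the partitions on disk and register them in the metastore.
MSCK REPAIR TABLE db.external_table;

-- 3a. Create an Iceberg table from it, leaving the source intact ...
CALL spark_catalog.system.snapshot('db.external_table', 'db.iceberg_table');
-- 3b. ... or convert the table in place.
-- CALL spark_catalog.system.migrate('db.external_table');
```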
   
   What are your thoughts, @RussellSpitzer @rdblue?
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


