rdblue commented on a change in pull request #2210:
URL: https://github.com/apache/iceberg/pull/2210#discussion_r583267608
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -790,4 +804,53 @@ public Identifier identifier() {
public static TableIdentifier identifierToTableIdentifier(Identifier identifier) {
  return TableIdentifier.of(Namespace.of(identifier.namespace()), identifier.name());
}
+
+ /**
+ * Use Spark to list all partitions in the table.
+ *
+ * @param spark a Spark session
+ * @param rootPath the table's root path
+ * @param format format of the table's data files
+ * @return all partitions of the table
+ */
+ public static List<SparkTableUtil.SparkPartition> getPartitions(SparkSession spark, Path rootPath, String format) {
+ FileStatusCache fileStatusCache = FileStatusCache.getOrCreate(spark);
+ Map<String, String> emptyMap = Collections.emptyMap();
+
+ InMemoryFileIndex fileIndex = new InMemoryFileIndex(
Review comment:
I think that Spark allows pointing to a location that is inside a path-based table. So if you pointed to p1=x, then you'd get the p2 and p3 columns, but not p1. We can test that, but that's what I'm concerned about. If we need a value for p1 but don't get it from Spark, then there is no way to add it besides including it in the partition map.

Maybe using the map as a filter like you suggest would work, but that seems a bit strange to me because it would sometimes supply partition values and sometimes get those values from the path.
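
To make the concern concrete, here is a minimal, self-contained sketch (not part of this PR) of the partition-discovery behavior I'm describing. The warehouse path and the partition columns p1/p2/p3 are made up for illustration:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PartitionDiscoverySketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .master("local[1]")
        .appName("partition-discovery-sketch")
        .getOrCreate();

    // Pointing at the table root: discovery walks p1=.../p2=.../p3=...,
    // so p1, p2, and p3 all show up as inferred partition columns.
    Dataset<Row> fromRoot = spark.read().parquet("/warehouse/db/tbl");
    fromRoot.printSchema();

    // Pointing inside the table at a single p1 partition: discovery only
    // sees the directories below that root, so p2 and p3 are inferred but
    // p1 is not. Its value would have to come from the partition map.
    Dataset<Row> fromSubdir = spark.read().parquet("/warehouse/db/tbl/p1=x");
    fromSubdir.printSchema();

    spark.stop();
  }
}
```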