aokolnychyi commented on a change in pull request #2210:
URL: https://github.com/apache/iceberg/pull/2210#discussion_r591013852
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/Spark3Util.java
##########
@@ -790,4 +803,66 @@ public Identifier identifier() {
public static TableIdentifier identifierToTableIdentifier(Identifier identifier) {
  return TableIdentifier.of(Namespace.of(identifier.namespace()), identifier.name());
}
+
+ /**
+  * Uses Spark to list all partitions in the table.
+  *
+  * @param spark a Spark session
+  * @param rootPath the root path of the table
+  * @param format format of the table's data files
+  * @return all of the table's partitions
+  */
+ public static List<SparkPartition> getPartitions(SparkSession spark, Path rootPath, String format) {
+ FileStatusCache fileStatusCache = FileStatusCache.getOrCreate(spark);
+ Map<String, String> emptyMap = Collections.emptyMap();
+
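+ // Build an InMemoryFileIndex over the table root so Spark can discover data files and infer partition values from the directory layout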
+ InMemoryFileIndex fileIndex = new InMemoryFileIndex(
+ spark,
+ JavaConverters
+ .collectionAsScalaIterableConverter(ImmutableList.of(rootPath))
+ .asScala()
+ .toSeq(),
+ JavaConverters
+ .mapAsScalaMapConverter(emptyMap)
+ .asScala()
+ .toMap(Predef.<Tuple2<String, String>>conforms()),
+ Option.empty(),
+ fileStatusCache,
+ Option.empty(),
+ Option.empty());
+
+ org.apache.spark.sql.execution.datasources.PartitionSpec spec = fileIndex.partitionSpec();
+ StructType schema = spec.partitionColumns();
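+ // An unpartitioned layout produces no partitions; represent the whole table as a single partition at the root path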
+ if (spec.partitions().isEmpty()) {
+ return ImmutableList.of(new SparkPartition(Collections.emptyMap(), rootPath.toString(), format));
+ }
+
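+ // For each discovered partition, convert the Catalyst values to external values and record them as strings keyed by partition column name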
+ return JavaConverters
+ .seqAsJavaListConverter(spec.partitions())
+ .asJava()
+ .stream()
+ .map(partition -> {
+ Map<String, String> values = new HashMap<>();
+ JavaConverters.asJavaIterableConverter(schema).asJava().forEach(field -> {
+ int fieldIndex = schema.fieldIndex(field.name());
+ Object catalystValue = partition.values().get(fieldIndex, field.dataType());
+ Object value = CatalystTypeConverters.convertToScala(catalystValue, field.dataType());
+ values.put(field.name(), value.toString());
+ });
+ return new SparkPartition(values, partition.path().toString(), format);
+ }).collect(Collectors.toList());
+ }
+
+ public static org.apache.spark.sql.catalyst.TableIdentifier toV1TableIdentifier(Identifier identifier) {
+ Preconditions.checkArgument(identifier.namespace().length <= 1,
+     "Cannot load a session catalog namespace with more than 1 part. Given %s", identifier);
Review comment:
nit: I feel like the error message here can be improved, since we don't do any loading in this method.
```
Cannot convert %s to a v1 identifier; namespace contains more than 1 part
```
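For example, the check above could read something like this (just a sketch of the reworded precondition; the rest of the method stays as in the diff):
```
public static org.apache.spark.sql.catalyst.TableIdentifier toV1TableIdentifier(Identifier identifier) {
  // Reworded message: this method only converts identifiers, it doesn't load anything
  Preconditions.checkArgument(identifier.namespace().length <= 1,
      "Cannot convert %s to a v1 identifier; namespace contains more than 1 part", identifier);
  // ... rest of the method unchanged
}
```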