felixYyu commented on a change in pull request #3862:
URL: https://github.com/apache/iceberg/pull/3862#discussion_r805631668



##########
File path: 
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java
##########
@@ -272,6 +290,80 @@ public void deleteWhere(Filter[] filters) {
     }
   }
 
+  @Override
+  public StructType partitionSchema() {
+    Types.StructType structType = Partitioning.partitionType(table());
+    List<Types.NestedField> structFields = Lists.newArrayListWithExpectedSize(structType.fields().size());
+    structType.fields().forEach(nestedField -> {
+      if (nestedField.name().endsWith("hour") ||
+              nestedField.name().endsWith("month") ||
+              nestedField.name().endsWith("year")) {
+        structFields.add(Types.NestedField.optional(nestedField.fieldId(), nestedField.name(), Types.StringType.get()));
+      } else {
+        // keep the field unchanged
+        structFields.add(nestedField);
+      }
+    });
+
+    return (StructType) SparkSchemaUtil.convert(Types.StructType.of(structFields));
+  }
+
+  @Override
+  public void createPartition(InternalRow ident, Map<String, String> properties) throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Cannot explicitly create partitions in Iceberg tables");
+  }
+
+  @Override
+  public boolean dropPartition(InternalRow ident) {
+    throw new UnsupportedOperationException("Cannot explicitly drop partitions in Iceberg tables");
+  }
+
+  @Override
+  public void replacePartitionMetadata(InternalRow ident, Map<String, String> properties)
+          throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Iceberg partitions do not support metadata");
+  }
+
+  @Override
+  public Map<String, String> loadPartitionMetadata(InternalRow ident) throws UnsupportedOperationException {
+    throw new UnsupportedOperationException("Iceberg partitions do not support metadata");
+  }
+
+  @Override
+  public InternalRow[] listPartitionIdentifiers(String[] names, InternalRow ident) {
+    // support SHOW PARTITIONS
+    List<InternalRow> rows = Lists.newArrayList();
+    Dataset<Row> df = SparkTableUtil.loadMetadataTable(sparkSession(), icebergTable, MetadataTableType.PARTITIONS)
+            .select("partition");
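
As an aside, the mapping rule in the `partitionSchema()` hunk above (partition fields whose names end in `hour`, `month`, or `year`, i.e. Iceberg's time transforms, are surfaced to Spark as strings; all other fields keep their type) can be sketched standalone. The `Field` record and the field names below are hypothetical stand-ins for Iceberg's `Types.NestedField`, used only to illustrate the rule:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PartitionSchemaSketch {
  // Hypothetical stand-in for Iceberg's Types.NestedField.
  record Field(String name, String type) {}

  // Time-transform partition fields (*hour, *month, *year) become strings;
  // identity/bucket/truncate fields pass through unchanged.
  static Field toPartitionField(Field f) {
    if (f.name().endsWith("hour") || f.name().endsWith("month") || f.name().endsWith("year")) {
      return new Field(f.name(), "string");
    }
    return f;
  }

  public static void main(String[] args) {
    List<Field> in = Arrays.asList(
        new Field("ts_month", "int"),     // time transform -> string
        new Field("category", "string"),  // identity -> unchanged
        new Field("id_bucket", "int"));   // bucket -> unchanged
    List<Field> out = in.stream()
        .map(PartitionSchemaSketch::toPartitionField)
        .collect(Collectors.toList());
    out.forEach(f -> System.out.println(f.name() + ": " + f.type()));
  }
}
```

Note this is a name-suffix heuristic, so a column literally named e.g. `myhour` would also be remapped; matching on the partition transform itself would be stricter.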

Review comment:
       fixed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
