rdblue commented on a change in pull request #3459:
URL: https://github.com/apache/iceberg/pull/3459#discussion_r744334604



##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java
##########
@@ -298,6 +317,97 @@ public void deleteWhere(Filter[] filters) {
     }
   }
 
+  @Override
+  public StructType partitionSchema() {
+    List<Types.NestedField> fields = icebergTable.spec().partitionType().fields();
+    StructField[] structFields = new StructField[fields.size()];
+    int index = 0;
+    for (Types.NestedField field : fields) {
+      StructField structField = new StructField(field.name(), SparkSchemaUtil.convert(field.type()), true,
+          Metadata.empty());
+      structFields[index] = structField;
+      ++index;
+    }
+    return new StructType(structFields);
+  }
+
+  @Override
+  public void createPartition(InternalRow ident, Map<String, String> properties)
+      throws PartitionAlreadyExistsException, UnsupportedOperationException {
+    throw new UnsupportedOperationException("Cannot create partition: use the addFile procedure to refresh");
+  }
+
+  @Override
+  public boolean dropPartition(InternalRow ident) {
+    throw new UnsupportedOperationException("Cannot drop partition: use a DELETE statement instead");
+  }
+
+  @Override
+  public void replacePartitionMetadata(InternalRow ident, Map<String, String> properties)
+      throws NoSuchPartitionException, UnsupportedOperationException {

Review comment:
       Can you remove any `throws` exceptions that are not actually thrown or that are not checked exceptions?
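
       A minimal, self-contained sketch (not Iceberg code; the class and method names are hypothetical) of why the `throws` clause is redundant here: `UnsupportedOperationException` extends `RuntimeException`, so it is unchecked and the compiler never requires it to be declared.

       ```java
       public class ThrowsDemo {
           // Throwing an unchecked exception: no `throws` declaration is required,
           // so listing UnsupportedOperationException in the signature adds nothing.
           static void createPartition() {
               throw new UnsupportedOperationException("Cannot create partition");
           }

           static boolean isUnchecked() {
               try {
                   createPartition();
                   return false; // unreachable: createPartition always throws
               } catch (UnsupportedOperationException e) {
                   // RuntimeException subclasses are unchecked exceptions.
                   return e instanceof RuntimeException;
               }
           }

           public static void main(String[] args) {
               System.out.println(isUnchecked());
           }
       }
       ```

       The same reasoning applies to `NoSuchPartitionException` and `PartitionAlreadyExistsException` only if they are checked and actually thrown; a declared checked exception that the body can never throw just forces pointless handling on callers.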




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
